WorldWideScience

Sample records for maintenance audiovisual training

  1. 36 CFR 1237.20 - What are special considerations in the maintenance of audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... considerations in the maintenance of audiovisual records? 1237.20 Section 1237.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.20 What are special considerations in the maintenance of audiovisual...

  2. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Science.gov (United States)

    Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche

    2016-01-01

    Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
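The partialing-out step described in this abstract can be illustrated with a residual-based partial correlation. The sketch below uses synthetic data; the variable names (`iq`, `onset`, `score`), sample size, and effect sizes are invented for illustration and are not the study's actual analysis.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z out of both."""
    design = np.column_stack([np.ones_like(z), z])
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic example: does onset age relate to the incongruent-audio
# score once nonverbal IQ is controlled for?
rng = np.random.default_rng(1)
iq = rng.normal(100, 15, 57)        # hypothetical confound
onset = rng.normal(7, 2, 57)        # hypothetical predictor
score = -0.4 * onset + 0.02 * iq + rng.normal(0, 1, 57)
print(partial_corr(score, onset, iq))   # negative partial correlation
```

The residual method is equivalent to the standard partial-correlation formula: each variable is replaced by what is left after a least-squares fit on the covariate, and the remaining correlation is what the covariate cannot explain.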

  3. Audiovisual Interval Size Estimation Is Associated with Early Musical Training.

    Directory of Open Access Journals (Sweden)

    Mary Kathryn Abel

    Full Text Available Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.

  4. The Efficacy of Short-term Gated Audiovisual Speech Training for Improving Auditory Sentence Identification in Noise in Elderly Hearing Aid Users

    Science.gov (United States)

    Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn

    2017-01-01

    This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). No such improvement was observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or a group-by-session interaction calls for further research. PMID:28348542

  5. Long-term music training modulates the recalibration of audiovisual simultaneity.

    Science.gov (United States)

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  6. Automated social skills training with audiovisual information.

    Science.gov (United States)

    Tanaka, Hiroki; Sakti, Sakriani; Neubig, Graham; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2016-08-01

    People with social communication difficulties tend to have superior skills using computers, and as a result computer-based social skills training systems are flourishing. Social skills training, performed by human trainers, is a well-established method for acquiring appropriate skills in social interaction. Previous works have attempted to automate one or several parts of social skills training through human-computer interaction. However, while previous work on simulating social skills training considered only acoustic and linguistic features, human social skills trainers take into account visual features (e.g. facial expression, posture). In this paper, we create and evaluate a social skills training system that closes this gap by considering audiovisual features regarding ratio of smiling, yaw, and pitch. An experimental evaluation measures the difference in effectiveness of social skills training when using audio features and audiovisual features. Results showed that the visual features were effective in improving users' social skills.

  7. The contribution of perceptual factors and training on varying audiovisual integration capacity.

    Science.gov (United States)

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2018-06-01

    The suggestion that the capacity of audiovisual integration has an upper limit of 1 was challenged in 4 experiments using perceptual factors and training to enhance the binding of auditory and visual information. Participants were required to note a number of specific visual dot locations that changed in polarity when a critical auditory stimulus was presented, under relatively fast (200-ms stimulus onset asynchrony [SOA]) and slow (700-ms SOA) rates of presentation. In Experiment 1, transient cross-modal congruency between the brightness of the polarity change and the pitch of the auditory tone was manipulated. In Experiment 2, sustained chunking was enabled on certain trials by connecting varying dot locations with vertices. In Experiment 3, training was employed to determine whether capacity would increase through repeated experience with an intermediate presentation rate (450 ms). Estimates of audiovisual integration capacity (K) were larger than 1 during cross-modal congruency at slow presentation rates (Experiment 1), during perceptual chunking at slow and fast presentation rates (Experiment 2), and during an intermediate presentation rate posttraining (Experiment 3). Finally, Experiment 4 showed a linear increase in K using SOAs ranging from 100 to 600 ms, suggestive of quantitative rather than qualitative changes in the mechanisms of audiovisual integration as a function of presentation rate. The data compromise the suggestion that the capacity of audiovisual integration is limited to 1 and suggest that the ability to bind sounds to sights is contingent on individual and environmental factors. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
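Capacity estimates in change-detection paradigms of this kind are commonly computed with Cowan's K, K = N × (hit rate − false-alarm rate), where N is the number of tracked locations. The abstract does not specify the authors' exact estimator, so the formula below is a standard stand-in, not necessarily theirs.

```python
def cowan_k(n_items, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of audiovisually bound items,
    from hits on change trials and false alarms on no-change trials."""
    return n_items * (hit_rate - false_alarm_rate)

# With 4 dot locations, 80% hits and 20% false alarms, K is about 2.4,
# i.e. roughly two locations successfully bound to the critical tone.
print(round(cowan_k(4, 0.80, 0.20), 2))
```

Under this estimator, "capacity larger than 1" simply means the hit/false-alarm gap exceeds 1/N, which is how values of K above 1 in the experiments above would be read.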

  8. Training of maintenance personnel

    International Nuclear Information System (INIS)

    Rabouhams, J.

    1986-01-01

    This lecture details the methods and means developed by EDF to ensure the training of maintenance personnel according to their initial educational background and their experience. The following points are treated: general organization of the training for maintenance personnel in PWR and GCR nuclear power stations and in the Creys Malville fast breeder reactor; basic nuclear training and pedagogical aids developed for this purpose; specific training and training provided by contractors; complementary training taking into account operating experience and feedback; improvement of speed, competence and safety during shut-down operations by adapted training. (orig.)

  9. Maintenance training centre at NPP Paks, Hungary

    International Nuclear Information System (INIS)

    Babos, K.

    1996-01-01

    The lecture describes the features of maintenance of WWER-440/213 units, the existing maintenance training system, and the necessity of changing the maintenance training system at NPP Paks. The author introduces the planned maintenance training centre, the training facilities, and the main tasks related to maintenance training. (author)

  10. Interactive Football-Training Based on Rebounders with Hit Position Sensing and Audio-Visual Feedback

    DEFF Research Database (Denmark)

    Jensen, Mads Møller; Grønbæk, Kaj; Thomassen, Nikolaj

    2014-01-01

    ... However, most of these tools are created with a single goal, either to measure or train, and are often used and tested in very controlled settings. In this paper, we present an interactive football-training platform, called Football Lab, featuring sensor-mounted rebounders as well as audio-visual...

  11. Audiovisual integration facilitates monkeys' short-term memory.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.

  12. Audio-Visual Feedback for Self-monitoring Posture in Ballet Training

    DEFF Research Database (Denmark)

    Knudsen, Esben Winther; Hølledig, Malte Lindholm; Bach-Nielsen, Sebastian Siem

    2017-01-01

    An application for ballet training is presented that monitors the deviation of the posture position (straightness of the spine and rotation of the pelvis) from the ideal position in real-time. The human skeletal data is acquired through a Microsoft Kinect v2. The movement of the student is mirrored... -coded. In an experiment with 9-12 year-old dance students from a ballet school, comparing the audio-visual feedback modality with no feedback leads to an increase in posture accuracy (p

  13. BWR Services maintenance training program

    International Nuclear Information System (INIS)

    Cox, J.H.; Chittenden, W.F.

    1979-01-01

    BWR Services has implemented a five-phase program to increase plant availability and capacity factor in operating BWRs. One phase of this program is establishing a maintenance training program on NSSS equipment; the scope encompasses maintenance on both mechanical equipment and electrical control and instrumentation equipment. The program utilizes actual product line equipment for practical hands-on training. A total of 23 formal courses will be in place by the end of 1979. The General Electric Company is making a multimillion dollar investment in facilities to support this training. These facilities are described.

  14. Maintenance training - a modern necessity

    International Nuclear Information System (INIS)

    Bushall, W.

    1987-01-01

    In recent years, there has been an increase in technically advanced systems and equipment and a need for highly skilled and knowledgeable maintenance technicians to maintain them. To implement an effective training program, the key to success for training groups and plant staffs must be cooperation and creativity. This paper deals with plant staff interface and how to effectively conduct performance-based training while holding the line on costs. This paper includes cost-effective and innovative measures to produce performance-based training for maintenance disciplines, including: using the plant staff as a resource of subject matter experts in the development and verification of training materials; using the plant staff as a resource for the construction of training aids; using salvage and surplus to produce high-quality, low-cost training aids; and using cutaways for better understanding of the theory of equipment operation. These cost-saving practices are currently being used at Gulf States Utilities' River Bend Nuclear Station.

  15. Dynamic environment for training for maintenance

    International Nuclear Information System (INIS)

    Sanchez, F.; Gonzalez, F.; Marti, F.

    2001-01-01

    The governing board of TECNATOM approved a project for creating a maintenance training center in 1995. The objective was to cover training needs identified in the maintenance area, mainly in issues related to continuous training, retraining and professional development. A team of instructors in the 3 specialties (mechanical, electrical and instrumentation) was selected, written training material has been developed, new facilities and adequate mock-ups for training have been acquired, and more than 100 didactical units have been developed. The mock-ups are real components from nuclear power plants that have been adapted to fulfill their didactical function. New courses and mock-ups are being developed as new customer needs are identified. (A.C.)

  16. Dynamic environment for training for maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, F.; Gonzalez, F.; Marti, F. [Tecnatom, s.a., Madrid (Spain)

    2001-07-01

    The governing board of TECNATOM approved a project for creating a maintenance training center in 1995. The objective was to cover training needs identified in the maintenance area, mainly in issues related to continuous training, retraining and professional development. A team of instructors in the 3 specialties (mechanical, electrical and instrumentation) was selected, written training material has been developed, new facilities and adequate mock-ups for training have been acquired, and more than 100 didactical units have been developed. The mock-ups are real components from nuclear power plants that have been adapted to fulfill their didactical function. New courses and mock-ups are being developed as new customer needs are identified. (A.C.)

  17. Self-organizing maps for measuring similarity of audiovisual speech percepts

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    The goal of this work is to find a way to measure similarity of audiovisual speech percepts. Phoneme-related self-organizing maps (SOM) with a rectangular basis are trained with data material from a (labeled) video film. For the training, a combination of auditory speech features and corresponding... Dependent on the training data, these other units may also be contextually immediate neighboring units. The poster demonstrates the idea with text material spoken by one individual subject using a set of simple audio-visual features. The data material for the training process consists of 44 labeled... sentences in German with a balanced phoneme repertoire. As a result it can be stated that (i) the SOM can be trained to map auditory and visual features in a topology-preserving way and (ii) they show strain due to the influence of other audio-visual units. The SOM can be used to measure similarity amongst...
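The rectangular-map training described in this record follows the standard SOM loop: find the best-matching unit for each sample, then pull it and its grid neighbours toward the sample. The sketch below is generic; the map size, feature dimensionality, and decay schedules are arbitrary illustrative choices, not those used in the poster.

```python
import numpy as np

def train_som(data, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a rectangular SOM: for each sample, find the best-matching
    unit (BMU) and pull it and its grid neighbours toward the sample,
    with learning rate and neighbourhood width decaying over epochs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in rng.permutation(data):
            d = np.linalg.norm(w - x, axis=2)          # distance to every unit
            bmu = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-np.sum((grid - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)           # neighbourhood update
    return w
```

Because neighbouring units are dragged along with each BMU, nearby map units end up coding similar feature vectors, which is the topology-preserving property the record reports in point (i).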

  18. Present status of the Nuclear Maintenance Training Center

    International Nuclear Information System (INIS)

    Kotani, Fumio

    1995-01-01

    The education and training to keep and improve the knowledge and skills of maintenance personnel, and to hand down those skills, undoubtedly play important roles in the safe operation and increased reliability of a nuclear power station. The Nuclear Maintenance Training Center (hereafter called the Center) provides a variety of education and training curriculums based on the levels and abilities of the trainees. The Center aims to enhance personnel's maintenance technique by offering curriculums on basic maintenance education for operators and supporting education and training for the personnel of contractors. The Center has two main features: first, it has actual components, or equipment similar to the actual components, which enable practical training; second, we regard past troubles as valuable experiences and therefore focus on education to prevent the recurrence of troubles by teaching the trainees the meaning and necessity of the training they take. In the eleven years since the establishment of the Center, it has been used by a total of about 60,000 people. As for future tasks, the Center is expected to revitalize itself to offer attractive education and training and to become more actively involved in the development of maintenance personnel with adequate knowledge and skills. (author)

  19. Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation.

    Science.gov (United States)

    Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina

    2017-01-01

    Hemianopic patients exhibit visual detection improvement in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this "online" multisensory improvement, there is evidence of long-lasting, "offline" effects induced by audiovisual training: patients show improved visual detection and orientation after they were trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated unilateral V1 lesion with possible spared tissue and reproduced "online" effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, the audiovisual training is able to effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and reinforcement of the SC-extrastriate route (this occurs in the presence of spared V1 tissue, regardless of eye condition).
    The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual...
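The Hebbian reinforcement invoked in this model (synapses strengthen when pre- and postsynaptic units are co-active) comes down to an update of the form Δw = η · post · pre. The toy sketch below uses a saturation bound as a typical modelling choice; the actual model's equations are not given in the abstract, so all parameters here are illustrative.

```python
import numpy as np

def hebbian_step(w, pre, post, eta=0.01, w_max=1.0):
    """One Hebbian update of a weight matrix w (post x pre):
    each weight grows in proportion to the product of its
    presynaptic and postsynaptic activities, clipped at w_max."""
    return np.clip(w + eta * np.outer(post, pre), 0.0, w_max)

# Co-active pre/post unit pairs (e.g. retina and SC) are strengthened;
# pairs involving a silent unit are left unchanged.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 1.0])     # hypothetical retinal input activity
post = np.array([1.0, 0.0])         # hypothetical SC unit activity
w = hebbian_step(w, pre, post)
```

Repeated audiovisual stimulation makes such co-activation more frequent along the retina-SC pathway, which is how training can selectively reinforce that route in the model.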

  20. Creating Value by Integrating Logistic Trains Services and Maintenance Activities

    NARCIS (Netherlands)

    Busstra, Marten; van Dongen, Leonardus Adriana Maria

    2015-01-01

    NedTrain is the Netherlands Railway's subsidiary responsible for rolling stock maintenance. Train sets are brought in for short-term routine maintenance after set intervals of some 75 to 120 days. When a major defect occurs, train sets are allocated to one of the three maintenance depots and are

  1. Neural initialization of audiovisual integration in prereaders at varying risk for developmental dyslexia.

    Science.gov (United States)

    Karipidis, Iliana I; Pleisch, Georgette; Röthlisberger, Martina; Hofstetter, Christoph; Dornbierer, Dario; Stämpfli, Philipp; Brem, Silvia

    2017-02-01

    Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we simultaneously acquired event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas, and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short training session induces audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and the phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

  2. Historia audiovisual para una sociedad audiovisual

    Directory of Open Access Journals (Sweden)

    Julio Montero Díaz

    2013-04-01

    Full Text Available This article analyzes the possibilities of presenting an audiovisual history in a society in which audiovisual media have progressively gained greater prominence. We analyze specific cases of films and historical documentaries, and we assess the difficulties faced by historians in understanding the keys of audiovisual language, and by filmmakers in understanding and incorporating history into their productions. We conclude that it would not be possible to disseminate history in the western world without audiovisual resources circulated through various types of screens (cinema, television, computer, mobile phone, video games).

  3. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Full Text Available Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had a slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  4. Cognitive control during audiovisual working memory engages frontotemporal theta-band interactions.

    Science.gov (United States)

    Daume, Jonathan; Graetz, Sebastian; Gruber, Thomas; Engel, Andreas K; Friese, Uwe

    2017-10-03

    Working memory (WM) maintenance of sensory information has been associated with enhanced cross-frequency coupling between the phase of low frequencies and the amplitude of high frequencies, particularly in medial temporal lobe (MTL) regions. It has been suggested that these WM maintenance processes are controlled by areas of the prefrontal cortex (PFC) via frontotemporal phase synchronisation in low frequency bands. Here, we investigated whether enhanced cognitive control during audiovisual WM as compared to visual WM alone is associated with increased low-frequency phase synchronisation between sensory areas maintaining WM content and areas from PFC. Using magnetoencephalography, we recorded neural oscillatory activity from healthy human participants engaged in an audiovisual delayed-match-to-sample task. We observed that regions from MTL, which showed enhanced theta-beta phase-amplitude coupling (PAC) during the WM delay window, exhibited stronger phase synchronisation within the theta-band (4-7 Hz) to areas from lateral PFC during audiovisual WM as compared to visual WM alone. Moreover, MTL areas also showed enhanced phase synchronisation to temporooccipital areas in the beta-band (20-32 Hz). Our results provide further evidence that a combination of long-range phase synchronisation and local PAC might constitute a mechanism for neuronal communication between distant brain regions and across frequencies during WM maintenance.
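Theta-band phase synchronisation of the kind reported here is often quantified with the phase-locking value (PLV). The abstract does not name the estimator used, so the following is a generic sketch: band-limit each signal, extract instantaneous phase from the analytic signal, and measure the consistency of the phase difference over time.

```python
import numpy as np

def band_phase(sig, fs, lo, hi):
    """Instantaneous phase of sig restricted to the [lo, hi] Hz band,
    via the analytic signal (positive-frequency band kept, doubled)."""
    freqs = np.fft.fftfreq(len(sig), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return np.angle(np.fft.ifft(2 * np.fft.fft(sig) * band))

def plv(sig_a, sig_b, fs, lo=4.0, hi=7.0):
    """Phase-locking value in [0, 1]: 1 means a perfectly consistent
    theta-band phase lag between the two signals."""
    dphi = band_phase(sig_a, fs, lo, hi) - band_phase(sig_b, fs, lo, hi)
    return np.abs(np.mean(np.exp(1j * dphi)))

fs = 250.0
t = np.arange(0, 4, 1 / fs)
a = np.sin(2 * np.pi * 5 * t)          # 5 Hz (theta-band) oscillation
b = np.sin(2 * np.pi * 5 * t + 1.0)    # same oscillation, constant lag
print(round(plv(a, b, fs), 3))         # close to 1.0
```

Note that a constant nonzero lag still yields a PLV near 1: the measure rewards a stable phase relationship, not zero lag, which is what makes it suitable for detecting frontotemporal coupling between distant regions.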

  5. Proficiency evaluation of maintenance personnel: Training equivalency determination

    International Nuclear Information System (INIS)

    Price, W.J.

    1991-01-01

    The nuclear industry has recognized the importance of safe, quality, productive maintenance practices and has taken a number of initiatives that have generally improved maintenance programs. Because proficient maintenance practices are critical to plant safety and reliability, most plants have also recognized the need for reliable, valid testing techniques that demonstrate and assure the competence of their maintenance personnel. Until now, resource demands were too great to develop in-plant testing programs. In the past, maintenance supervisors have exempted personnel from training, using informal judgment of the employees' previous training and experience and informal observation of the employee on the job. While this procedure may have some degree of validity, it fails to provide the documentation for training equivalency that is required to satisfy the US Nuclear Regulatory Commission (NRC) regulations and Institute of Nuclear Power Operations (INPO) guidelines. To assess and demonstrate the proficiency levels of personnel, Calvert Cliffs needed to establish an objective, reliable, time-saving, and valid system to evaluate the competency levels of personnel. This was done in a joint effort with the Electric Power Research Institute

  6. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  7. On the relevance of script writing basics in audiovisual translation practice and training

    Directory of Open Access Journals (Sweden)

    Juan José Martínez-Sierra

    2012-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-7968.2012v1n29p145   Audiovisual texts possess characteristics that clearly differentiate audiovisual translation from both oral and written translation, and prospective screen translators are usually taught about the issues that typically arise in audiovisual translation. This article argues for the development of an interdisciplinary approach that brings together Translation Studies and Film Studies, which would prepare future audiovisual translators to work with the nature and structure of a script in mind, in addition to the study of common and diverse translational aspects. Focusing on film, the article briefly discusses the nature and structure of scripts, and identifies key points in the development and structuring of a plot. These key points and various potential hurdles are illustrated with examples from the films Chinatown and La habitación de Fermat. The second part of this article addresses some implications for teaching audiovisual translation.

  8. The production of audiovisual teaching tools in minimally invasive surgery.

    Science.gov (United States)

    Tolerton, Sarah K; Hugh, Thomas J; Cosman, Peter H

    2012-01-01

    Audiovisual learning resources have become valuable adjuncts to formal teaching in surgical training. This report discusses the process and challenges of preparing an audiovisual teaching tool for laparoscopic cholecystectomy. The relative value in surgical education and training, for both the creator and viewer are addressed. This audiovisual teaching resource was prepared as part of the Master of Surgery program at the University of Sydney, Australia. The different methods of video production used to create operative teaching tools are discussed. Collating and editing material for an audiovisual teaching resource can be a time-consuming and technically challenging process. However, quality learning resources can now be produced even with limited prior video editing experience. With minimal cost and suitable guidance to ensure clinically relevant content, most surgeons should be able to produce short, high-quality education videos of both open and minimally invasive surgery. Despite the challenges faced during production of audiovisual teaching tools, these resources are now relatively easy to produce using readily available software. These resources are particularly attractive to surgical trainees when real time operative footage is used. They serve as valuable adjuncts to formal teaching, particularly in the setting of minimally invasive surgery. Copyright © 2012 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  10. Behavioral Parent Training in Child Welfare: Maintenance and Booster Training

    Science.gov (United States)

    Van Camp, Carole M.; Montgomery, Jan L.; Vollmer, Timothy R.; Kosarek, Judith A.; Happe, Shawn; Burgos, Vanessa; Manzolillo, Anthony

    2008-01-01

    Previous research has demonstrated the efficacy of a 30-hr behavioral parent training program at increasing skill accuracy. However, it remains unknown whether skills acquisitions are maintained on a long-term basis. Few studies have evaluated the maintenance of skills learned during behavioral parent training for foster parents. The purpose of…

  11. [Audio-visual communication in the history of psychiatry].

    Science.gov (United States)

    Farina, B; Remoli, V; Russo, F

    1993-12-01

The authors analyse the evolution of visual communication in the history of psychiatry. From 18th-century oil paintings to the first daguerreotype prints, through cinematography and modern audiovisual systems, they observe an increasing diffusion of new communication techniques in psychiatry, and describe the use of the different techniques in psychiatric practice. The article ends with a brief review of current applications of audiovisual media in therapy, training, teaching, and research.

  12. Expert-led didactic versus self-directed audiovisual training of confocal laser endomicroscopy in evaluation of mucosal barrier defects.

    Science.gov (United States)

    Huynh, Roy; Ip, Matthew; Chang, Jeff; Haifer, Craig; Leong, Rupert W

    2018-01-01

Confocal laser endomicroscopy (CLE) allows mucosal barrier defects along the intestinal epithelium to be visualized in vivo during endoscopy. Training in CLE interpretation can be achieved didactically or through self-directed learning. This study aimed to compare the effectiveness of expert-led didactic teaching with self-directed audiovisual teaching for training inexperienced analysts to recognize mucosal barrier defects on endoscope-based CLE (eCLE). This randomized controlled study involved trainee analysts who were taught to recognize mucosal barrier defects on eCLE either didactically or through an audiovisual clip. After being trained, they evaluated 6 sets of 30 images. Image evaluation required the trainees to determine whether specific features of barrier dysfunction were present or not. Trainees in the didactic group engaged in peer discussion and received feedback after each set, while this did not happen in the self-directed group. Accuracy, sensitivity, and specificity of both groups were compared. Trainees in the didactic group achieved higher overall accuracy (87.5 % vs 85.0 %, P = 0.002) and sensitivity (84.5 % vs 80.4 %, P = 0.002) than trainees in the self-directed group. Interobserver agreement was also higher in the didactic group (k = 0.686, 95 % CI 0.680 - 0.691). Expert-led didactic training was thus the more effective approach for teaching recognition of mucosal barrier defects on eCLE.
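The per-group accuracy, sensitivity, and specificity reported in studies like this reduce to simple ratios over true/false positives and negatives against the reference standard. A minimal sketch, using illustrative counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from a 2x2 table."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for one trainee's 200 feature judgments
acc, sens, spec = diagnostic_metrics(tp=84, fp=10, tn=90, fn=16)
# acc = 0.87, sens = 0.84, spec = 0.90
```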

  13. Virtual and augmented reality for training on maintenance

    International Nuclear Information System (INIS)

    Gonzalez, F.

    2001-01-01

This paper presents two projects that support maintenance training using new technologies. Both projects aim at specifying, designing, developing, and demonstrating prototypes that allow computer-guided maintenance of complex mechanical elements using Virtual Reality (VIRMAN) and Augmented Reality (STARMATE) techniques. The VIRMAN project is dedicated to the development of maintenance training courses using Virtual Reality. It is based on the animation of three-dimensional images for component assembly/disassembly or equipment movements. STARMATE relies on Augmented Reality techniques, a growing area of Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene generated by a computer, augmenting the reality with additional information. (Author)

  14. Audio-Visual Aids for Cooperative Education and Training.

    Science.gov (United States)

    Botham, C. N.

    Within the context of cooperative education, audiovisual aids may be used for spreading the idea of cooperatives and helping to consolidate study groups; for the continuous process of education, both formal and informal, within the cooperative movement; for constant follow up purposes; and for promoting loyalty to the movement. Detailed…

  15. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Directory of Open Access Journals (Sweden)

    Yao Lu

Full Text Available Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  16. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG).

    Science.gov (United States)

    Lu, Yao; Paraskevopoulos, Evangelos; Herholz, Sibylle C; Kuchenbuch, Anja; Pantev, Christo

    2014-01-01

Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical, synchronous or various levels of asynchronous audiovisual events. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the Postcentral Gyrus. Musicians also showed significantly greater activation in the left Cerebellum, when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters the basic audiovisual temporal processing already in an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of the estimates regarding the timing of audiovisual events.

  17. Susquehanna SES maintenance supervisor training and certification

    International Nuclear Information System (INIS)

    Deckman, M.

    1991-01-01

Susquehanna's program targets all Supervisors, Supervisor Candidates, and Temporary Supervisors responsible for in-plant maintenance or maintenance support activities, including: mechanical maintenance; electrical maintenance; maintenance support (labor support, radwaste, etc.); mobile construction support (mechanical and electrical); chemistry; health physics; maintenance planning; and instrument and controls. The program integrates the three major areas of direct supervisory responsibility: (1) Leadership and Management - interpersonal skills that are typically humanistic and subjective, such as coaching, motivating, and communication. (2) Technical and Administrative - knowledge directly related to the job of supervising from the production, regulatory, and accountability perspectives. These topics are very objective and include training on work packages, plant chemistry parameters, radiological concerns, etc. (3) Technical Skills - ensuring each Supervisor is technically competent in the plant systems, components, or equipment he/she is tasked with maintaining or overseeing. Typical skills found in this area are circuit breaker maintenance, primary system sampling, and overhauling pumps

  18. Innovative techniques in maintenance training

    International Nuclear Information System (INIS)

    Soileau, J.B.; Blackwell, C.; Rackos, N.; Elmer, L.B.

    1991-01-01

Performance-based training begins with a Job and Task Analysis (JTA) and culminates with the presentation of the developed training material. For optimum training effectiveness, a post-training feedback mechanism is used to update and/or upgrade material content. A Job and Task Analysis uses subject matter experts and supporting documentation to define the skills and knowledge necessary to perform functions ranging from total position responsibilities to the simplest tasks, depending on desired results. Once skills and knowledge are defined, decisions on the needed curriculum can be made, including objectives, exams, and training settings (classroom, lab, or job site). This focused curriculum and the determination of the best training settings are innovations of the performance-based training system. Past training experience has shown that a baseline level of classroom training acts as a catalyst for optimum hands-on training. The trainee exits the training phase able to do the job proficiently. This provides the training customer with quality training at minimum investment. Maintenance Training at Comanche Peak Steam Electric Station (CPSES) integrates knowledge and skill needs by providing the individual with focused classroom presentations followed by sufficient laboratory time to practice actual task activities. Labs are configured to represent plant equipment and the work environment to the maximum extent possible, and duplicates of in-plant work orders and procedures are used. This allows actions to be analyzed and corrective feedback to be given as necessary. As an added benefit, any mistakes made during training have minimal, if any, impact on plant performance

  19. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation.

    Science.gov (United States)

    Banks, Briony; Gowen, Emma; Munro, Kevin J; Adank, Patti

    2015-01-01

    Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated if listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.

  20. Copyright for audiovisual work and analysis of websites offering audiovisual works

    OpenAIRE

    Chrastecká, Nicolle

    2014-01-01

    This Bachelor's thesis deals with the matter of audiovisual piracy. It discusses the question of audiovisual piracy being caused not by the wrong interpretation of law but by the lack of competitiveness among websites with legal audiovisual content. This thesis questions the quality of legal interpretation in the matter of audiovisual piracy and focuses on its sufficiency. It analyses the responsibility of website providers, providers of the illegal content, the responsibility of illegal cont...

  1. 49 CFR 193.2713 - Training: operations and maintenance.

    Science.gov (United States)

    2010-10-01

... first-aid; and (3) All operating and appropriate supervisory personnel— (i) To understand detailed... 49 Transportation 3 2010-10-01 Training: operations and maintenance. 193.2713... LIQUEFIED NATURAL GAS FACILITIES: FEDERAL SAFETY STANDARDS Personnel Qualifications and Training § 193.2713...

  2. Control maintenance training program for special safety systems at Bruce B

    International Nuclear Information System (INIS)

    Reinwald, G.

    1997-01-01

It was recognized from the early days of commissioning of Bruce B that Control Maintenance staff would require a level of expertise sufficient to maintain the Special Safety Systems in proper running order. In the early 1980s this was achieved through hands-on experience during the original commissioning, troubleshooting, and placing of the various systems in service. Control maintenance procedures were developed and implemented as the new systems became available for commissioning, as were operating manuals, training manuals, etc. Under the Maintenance Manager, a Conduct of Maintenance section was organized. One of the responsibilities of this section was to develop a series of Maintenance Administrative Procedures (MAPs) that set the standards for maintenance activities, including training

  3. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    Science.gov (United States)

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Partial maintenance of auditory-based cognitive training benefits in older adults

    Science.gov (United States)

    Anderson, Samira; White-Schwoch, Travis; Choi, Hee Jae; Kraus, Nina

    2014-01-01

    The potential for short-term training to improve cognitive and sensory function in older adults has captured the public’s interest. Initial results have been promising. For example, eight weeks of auditory-based cognitive training decreases peak latencies and peak variability in neural responses to speech presented in a background of noise and instills gains in speed of processing, speech-in-noise recognition, and short-term memory in older adults. But while previous studies have demonstrated short-term plasticity in older adults, we must consider the long-term maintenance of training gains. To evaluate training maintenance, we invited participants from an earlier training study to return for follow-up testing six months after the completion of training. We found that improvements in response peak timing to speech in noise and speed of processing were maintained, but the participants did not maintain speech-in-noise recognition or memory gains. Future studies should consider factors that are important for training maintenance, including the nature of the training, compliance with the training schedule, and the need for booster sessions after the completion of primary training. PMID:25111032

  5. Training and information

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The Training and Information Division provides centralized direction and coordination for the training and information activities of the Center for Energy and Environment Research (formerly Puerto Rico Nuclear Center). The Division Head serves as Educational Officer, Technical Information Officer, and Public Information Officer. Training responsibilities include registering students; maintaining centralized records on training activities; preparing reports for ERDA; scheduling the utilization of training facilities; providing audiovisual equipment; assisting in the preparation of courses, seminars, symposia, and meetings; administering fellowship programs; and providing personal assistance to students in matters such as housing and immigration. The Division Head represents the Director on the Admissions Committee. Information responsibilities include preparation of manuscripts for ERDA patent clearance and publication release, maintenance of central files on all manuscripts and publications, preparation of the Annual Report, providing editorial and translation assistance, operation of a Technical Reading Room, operation of an ERDA Film Library, operation of a Reproduction Shop, providing copying services, and assisting visitors

  6. Experience in the application of S.A.T. for maintenance training at Cernavoda N.P.P. - U1

    International Nuclear Information System (INIS)

    Erdinici, Abdula

    1999-01-01

A short history of Maintenance Training at Cernavoda NPP Unit 1 will be presented, highlighting the fact that: - Cernavoda NPP Unit 1 is the first nuclear power plant in Romania; - Construction/commissioning and initial operation were done under the direct supervision of expert specialists (from Canada, Italy and the US). In addition, the application of Systematic Approach to Training (S.A.T.) principles at Cernavoda NPP for all training activities will be addressed. A short history of how the maintenance training activity developed over time will be detailed to address the following issues: - how the S.A.T. stages were applied; - how maintenance experience was gained during Unit 1 construction/commissioning and initial operation, and how this experience has been evaluated, credited and transferred; - how maintenance training was documented; - how the maintenance training activity is organized; - on-the-job training for maintenance personnel. As with other training activities at Cernavoda NPP, maintenance training begins with a training needs analysis for each maintenance position. These needs are documented through Job Related Training Requirements (JRTRs) produced for each maintenance position. During commissioning/initial operation, only the necessary maintenance training was delivered, such as: pump alignments, use of maintenance procedures, and application of maintenance documentation. 'Hands-on' activities under the supervision of expatriate specialists were the main training activity. Training coordinators for each maintenance activity (Mechanical, Electrical, I and C, and Services maintenance) have been appointed to administer maintenance training. Following the declaration of the unit in commercial operation, a new approach has been taken related to maintenance training. A Task Force to evaluate maintenance training status and experience was established at the Training Department's initiative; it was initially coordinated by a Canadian maintenance specialist

  7. First clinical implementation of audiovisual biofeedback in liver cancer stereotactic body radiation therapy

    International Nuclear Information System (INIS)

    Pollock, Sean; Tse, Regina; Martin, Darren

    2015-01-01

    This case report details a clinical trial's first recruited liver cancer patient who underwent a course of stereotactic body radiation therapy treatment utilising audiovisual biofeedback breathing guidance. Breathing motion results for both abdominal wall motion and tumour motion are included. Patient 1 demonstrated improved breathing motion regularity with audiovisual biofeedback. A training effect was also observed.

  8. Audio-visual biofeedback for respiratory-gated radiotherapy: Impact of audio instruction and audio-visual biofeedback on respiratory-gated radiotherapy

    International Nuclear Information System (INIS)

    George, Rohini; Chung, Theodore D.; Vedam, Sastry S.; Ramakrishnan, Viswanathan; Mohan, Radhe; Weiss, Elisabeth; Keall, Paul J.

    2006-01-01

    Purpose: Respiratory gating is a commercially available technology for reducing the deleterious effects of motion during imaging and treatment. The efficacy of gating is dependent on the reproducibility within and between respiratory cycles during imaging and treatment. The aim of this study was to determine whether audio-visual biofeedback can improve respiratory reproducibility by decreasing residual motion and therefore increasing the accuracy of gated radiotherapy. Methods and Materials: A total of 331 respiratory traces were collected from 24 lung cancer patients. The protocol consisted of five breathing training sessions spaced about a week apart. Within each session the patients initially breathed without any instruction (free breathing), with audio instructions and with audio-visual biofeedback. Residual motion was quantified by the standard deviation of the respiratory signal within the gating window. Results: Audio-visual biofeedback significantly reduced residual motion compared with free breathing and audio instruction. Displacement-based gating has lower residual motion than phase-based gating. Little reduction in residual motion was found for duty cycles less than 30%; for duty cycles above 50% there was a sharp increase in residual motion. Conclusions: The efficiency and reproducibility of gating can be improved by: incorporating audio-visual biofeedback, using a 30-50% duty cycle, gating during exhalation, and using displacement-based gating
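The residual-motion metric above (standard deviation of the respiratory signal within the gating window) can be sketched for a displacement-based gate. The synthetic trace and the exhale-centred window below are assumptions for illustration, not the study's data or software:

```python
import numpy as np

def residual_motion(displacement, duty_cycle):
    """Std of displacement inside a displacement-based gating window.

    The window keeps the fraction `duty_cycle` of samples closest to
    end-exhale (minimum displacement), mirroring displacement-based
    gating during exhalation.
    """
    threshold = np.quantile(displacement, duty_cycle)
    gated = displacement[displacement <= threshold]
    return np.std(gated)

fs = 25.0                                  # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)               # one minute of breathing
trace = 1.0 - np.cos(2 * np.pi * 0.25 * t) # 0 at end-exhale, 2 at peak inhale
# Wider duty cycles admit more motion into the gate
assert residual_motion(trace, 0.3) < residual_motion(trace, 0.7)
```

On this idealized trace the residual motion grows with duty cycle, consistent with the study's finding that residual motion rises sharply for duty cycles above 50%.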

  9. DOE handbook: Guide to good practices for training and qualification of maintenance personnel

    International Nuclear Information System (INIS)

    1996-03-01

    The purpose of this Handbook is to provide contractor training organizations with information that can be used to verify the adequacy of and/or modify existing maintenance training programs, or to develop new training programs. This guide, used in conjunction with facility-specific job analyses, provides a framework for training and qualification programs for maintenance personnel at DOE reactor and nonreactor nuclear facilities. Recommendations for qualification are made in four areas: education, experience, physical attributes, and training. The functional positions of maintenance mechanic, electrician, and instrumentation and control technician are covered by this guide. Sufficient common knowledge and skills were found to include the three disciplines in one guide to good practices. Contents include: qualifications; on-the-job training; trainee evaluation; continuing training; training effectiveness evaluation; and program records. Appendices are included which relate to: administrative training; industrial safety training; fundamentals training; tools and equipment training; facility systems and component knowledge training; facility systems and component skills training; and specialized skills training

  10. DOE handbook: Guide to good practices for training and qualification of maintenance personnel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    The purpose of this Handbook is to provide contractor training organizations with information that can be used to verify the adequacy of and/or modify existing maintenance training programs, or to develop new training programs. This guide, used in conjunction with facility-specific job analyses, provides a framework for training and qualification programs for maintenance personnel at DOE reactor and nonreactor nuclear facilities. Recommendations for qualification are made in four areas: education, experience, physical attributes, and training. The functional positions of maintenance mechanic, electrician, and instrumentation and control technician are covered by this guide. Sufficient common knowledge and skills were found to include the three disciplines in one guide to good practices. Contents include: qualifications; on-the-job training; trainee evaluation; continuing training; training effectiveness evaluation; and program records. Appendices are included which relate to: administrative training; industrial safety training; fundamentals training; tools and equipment training; facility systems and component knowledge training; facility systems and component skills training; and specialized skills training.

  11. Development and introduction of a modular training and qualification concept for maintenance personnel

    International Nuclear Information System (INIS)

    Kumpf, Thomas; Mueller, Nina; Hofbauer, Detlef

    2009-01-01

    The presentation entitled "Development and Introduction of a Modular Training and Qualification Concept for Maintenance Personnel" presents the background, concept and training modules of a reactor services training concept that was jointly elaborated by a working group consisting of the "Maintenance manager workshop" panel of the VGB, the Kraftwerksschule (KWS) PowerTech Training Center, and AREVA NP GmbH in Erlangen and Offenbach. This concept is part of AREVA's comprehensive training and qualification approach, which comprises, inter alia, extensive introduction sessions for new employees, the corporation-wide, highly appreciated AREVA University, specialized training facilities such as the CETIC in Chalon-sur-Saone, France and the training center in Lynchburg, USA, and the world's largest integration center for digital I and C equipment for safety applications in nuclear power plants, located in Erlangen. AREVA NP's training concept for maintenance personnel addresses not only employees from AREVA NP working in nuclear maintenance but also personnel from nuclear power plants, independent experts, representatives from authorities and other interest groups. (orig.)

  12. Strategies for media literacy: Audiovisual skills and the citizenship in Andalusia

    Directory of Open Access Journals (Sweden)

    Ignacio Aguaded-Gómez

    2012-07-01

    Media consumption is an undeniable fact in present-day society. The hours that members of all social segments spend in front of a screen take up a large part of their leisure time worldwide. Audiovisual communication becomes especially important within the context of today's digital society (the network society), where information and communication technologies pervade all corners of everyday life. However, people do not possess sufficient audiovisual media skills to cope with this mass media omnipresence. Neither the education system, nor civic associations, nor the media themselves have promoted the audiovisual skills needed to make people critically competent media viewers. This study aims to provide an updated conceptualization of the "audiovisual skill" in this digital environment and to transpose it onto a specific interventional environment, seeking to detect needs and shortcomings, plan global strategies to be adopted by governments, and devise training programmes for the various sectors involved.

  13. Attention to affective audio-visual information: Comparison between musicians and non-musicians

    NARCIS (Netherlands)

    Weijkamp, J.; Sadakata, M.

    2017-01-01

    Individuals with more musical training repeatedly demonstrate enhanced auditory perception abilities. The current study examined how these enhanced auditory skills interact with attention to affective audio-visual stimuli. A total of 16 participants with more than 5 years of musical training

  14. Audiovisual Temporal Processing and Synchrony Perception in the Rat.

    Science.gov (United States)

    Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L

    2016-01-01

    Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given
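The per-animal JND reported in this abstract can be estimated from the temporal order judgment psychometric function; one common convention is half the SOA span between the 25% and 75% "visual first" response points. The sketch below applies that convention to invented data with linear interpolation; it is not the authors' fitting procedure.

```python
import numpy as np

def jnd_from_toj(soas, p_visual_first):
    """Estimate the just noticeable difference (JND) from temporal order
    judgment data as half the SOA span between the 25% and 75%
    "visual first" points, via linear interpolation.

    soas: stimulus onset asynchronies in ms (negative = auditory first)
    p_visual_first: proportion of "visual first" responses per SOA
                    (must be monotonically increasing for np.interp)
    """
    soa_25 = np.interp(0.25, p_visual_first, soas)
    soa_75 = np.interp(0.75, p_visual_first, soas)
    return float((soa_75 - soa_25) / 2)

# Invented psychometric data loosely shaped like the task described above
soas = np.array([-200, -100, -40, 0, 40, 100, 200])         # ms
p_vf = np.array([0.05, 0.15, 0.35, 0.52, 0.68, 0.85, 0.95])
print(round(jnd_from_toj(soas, p_vf), 1))
```

A shallower psychometric slope widens the 25-75% span and therefore yields a larger JND, which is how individual differences such as the 77-122 ms range arise from this kind of calculation.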

  15. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients, between 33 and 70 years of age. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability...

  16. Vicarious audiovisual learning in perfusion education.

    Science.gov (United States)

    Rath, Thomas E; Holt, David W

    2010-12-01

    Perfusion technology is a mechanical and visual science traditionally taught with didactic instruction combined with clinical experience. It is difficult to provide perfusion students the opportunity to experience difficult clinical situations, set up complex perfusion equipment, or observe corrective measures taken during catastrophic events because of patient safety concerns. Although high fidelity simulators offer exciting opportunities for future perfusion training, we explore the use of a less costly, low fidelity form of simulation instruction: vicarious audiovisual learning. Two low fidelity modes of instruction, description with text and a vicarious, first person audiovisual production depicting the same content, were compared. Students (n = 37) sampled from five North American perfusion schools were prospectively randomized to one of two online learning modules, text or video. These modules described the setup and operation of the MAQUET ROTAFLOW stand-alone centrifugal console and pump. Using a 10 question multiple-choice test, students were assessed immediately after viewing the module (test #1) and then again 2 weeks later (test #2) to determine cognition and recall of the module content. In addition, students completed a questionnaire assessing the learning preferences of today's perfusion student. Mean test scores from test #1 for video learners (n = 18) were significantly higher (88.89%) than for text learners (n = 19) (74.74%) (p ...). Audiovisual learning modules may be an efficacious, low cost means of delivering perfusion training on subjects such as equipment setup and operation. Video learning appears to improve cognition and retention of learned content and may play an important role in how we teach perfusion in the future, as simulation technology becomes more prevalent.

  17. Developing a comprehensive training curriculum for integrated predictive maintenance

    Science.gov (United States)

    Wurzbach, Richard N.

    2002-03-01

    On-line equipment condition monitoring is a critical component of the world-class production and safety histories of many successful nuclear plant operators. From addressing availability and operability concerns of nuclear safety-related equipment to increasing profitability through support system reliability and reduced maintenance costs, predictive maintenance programs have increasingly become a vital contribution to the maintenance and operation decisions of nuclear facilities. In recent years, significant advancements have been made in the quality and portability of many of the instruments being used, and software improvements have been made as well. However, the single most influential component of the success of these programs is the impact of a trained and experienced team of personnel putting this technology to work. Changes in the nature of the power generation industry brought on by competition, mergers, and acquisitions have taken the historically stable personnel environment of power generation and created a very dynamic situation. As a result, many facilities have seen a significant turnover in personnel in key positions, including predictive maintenance personnel. It has become the challenge for many nuclear operators to maintain the consistent contribution of quality data and information from predictive maintenance that has become important in the overall equipment decision process. These challenges can be met through the implementation of quality training for predictive maintenance personnel and regular updating and re-certification of key technology holders. The use of data management tools and services aids in the sharing of information across sites within an operating company, and with experts who can contribute value-added data management and analysis. The overall effectiveness of predictive maintenance programs can be improved through the incorporation of newly developed comprehensive technology training courses. These courses address the use of

  18. Training experience at Experimental Breeder Reactor II

    International Nuclear Information System (INIS)

    Driscoll, J.W.; McCormick, R.P.; McCreery, H.I.

    1978-01-01

    The EBR-II Training Group develops, maintains, and oversees training programs and activities associated with the EBR-II Project. The group originally spent all its time on EBR-II plant-operations training, but has gradually spread its work into other areas. These other areas of training now include mechanical maintenance, the fuel manufacturing facility, instrumentation and control, fissile fuel handling, and emergency activities. This report describes each of the programs and gives a statistical breakdown of the time spent by the Training Group on each program. The major training programs for the EBR-II Project are presented by multimedia methods at a pace controlled by the student. The Training Group has extensive experience in the use of audio-visual techniques and equipment, including video-tapes, 35 mm slides, Super 8 and 16 mm film, models, and filmstrips. The effectiveness of these techniques is evaluated in this report.

  19. Education and training of operators and maintenance staff at Hamaoka Nuclear Power Stations

    International Nuclear Information System (INIS)

    Makido, Hideki; Hayashi, Haruhisa

    1999-01-01

    At Hamaoka Nuclear Power Station, in order to ensure higher safety and reliability of plant operation, education and training is provided consistently, on a comprehensive basis, for all operating, maintenance and other technical staff, aimed at developing more capable human resources in the nuclear power division. To this end, Hamaoka Nuclear Power Station has the 'Nuclear Training Center' on its site. The training center provides technical personnel, including operators and maintenance personnel, with practical training, utilizing simulators for operation training and facilities identical to those at the real plant. Thus, it plays a central role in promoting comprehensive education and training concerning nuclear power generation. Our education system covers the knowledge and skills necessary for the safe and stable operation of a nuclear power plant, targeting everyone from new employees to managerial personnel, and is organized systematically in accordance with experience and job level. We report here on the present education and training of operators and maintenance personnel at the Hamaoka Nuclear Training Center. (author)

  20. Plant services (maintenance) foreman training. Inception to implementation

    International Nuclear Information System (INIS)

    Dunlap, M.S.

    1991-01-01

    Training content and the time allocated for training have become essential and auditable commodities. This heightened awareness by upper management has increased the pressure on training organizations to demonstrate effective and efficient programs. Structured program design and administration can assist training organizations in meeting these requirements and assuring a quality program. Sequential development of the job analysis, qualification standard, and associated lesson plans, along with a methodology for tracking program changes that affect the system, are all required components of a systematic approach to training. This paper addresses these facets in establishing a training program. It describes the methods utilized, and the problems identified and resolved, during the development of the Westinghouse Idaho Nuclear Company (WINCO) Plant Services (Maintenance) Foreman Training Program.

  1. Augmented Reality Training for Assembly and Maintenance Skills

    Directory of Open Access Journals (Sweden)

    Preusche Carsten

    2011-12-01

    Augmented Reality (AR) has proven to be a suitable technology for training in the field of maintenance and assembly, as instructions and other location-dependent information can be directly linked and/or attached to physical objects. Since the objects to be maintained usually contain a large number of similar components (e.g. screws, plugs, etc.), the provision of location-dependent information is vitally important. Another advantage is that AR-based training takes place with the real physical devices of the training scenario. Thus, the trainee also practices the real use of the tools, whereby the corresponding sensorimotor skills are trained.

  2. Digital audiovisual archives

    CERN Document Server

    Stockinger, Peter

    2013-01-01

    Today, huge quantities of digital audiovisual resources are already available - everywhere and at any time - through Web portals, online archives and libraries, and video blogs. One central question with respect to this huge amount of audiovisual data is how it can be used in specific (social, pedagogical, etc.) contexts and what its potential interest is for target groups (communities, professionals, students, researchers, etc.). This book examines the question of the (creative) exploitation of digital audiovisual archives from a theoretical, methodological, technical and practical

  3. ATTENTION MAINTENANCE IN NOVICE DRIVERS: ASSESSMENT AND TRAINING

    OpenAIRE

    Pradhan, Anuj; Masserang, Kathleen M.; Divekar, Gautam; Reagan, Ian; Thomas, F. Dennis; Blomberg, Richard; Pollatsek, Alexander; Fisher, Donald

    2009-01-01

    All programs assessing attention maintenance inside the vehicle have required eye trackers and either a driving simulator or a specially equipped field vehicle. Ideally, one would like a way to assess attention maintenance that could be implemented on a desktop PC. Additionally, one would like to have a program that could be used to train novice drivers to maintain their attention more safely on the forward roadway. An experiment was run (a) to determine whether a program FOCAL (Focused Conce...

  4. Automobile Starting and Lighting System Maintenance Training ...

    African Journals Online (AJOL)

    The purpose of this study is to develop an automobile starting and lighting system maintenance training manual for technical college students. A Research and Development (R and D) design was adopted for the study. The population of the study is 348, comprising 76 auto-mechanics teachers, 36 automobile supervisors and ...

  5. Special training of craftsmen for maintenance and repair

    International Nuclear Information System (INIS)

    Scholz, H.E.

    1981-01-01

    The most important prerequisites for keeping nuclear power plants running safely and consistently are reliable plant supervision and control by operators and shift engineers, permanent servicing and maintenance, and the performance of qualified repair work. Therefore, sophisticated training of qualified labour in the fields of mechanical engineering, electrical engineering, and instrumentation and control is as important as that of operators and shift engineers. The objectives of the relevant training measures for this personnel are: acquisition of solid basic skills in the respective professions, development of a broad background in how nuclear power plants are designed and function, a sound insight into the working procedures in nuclear power plants, maintenance planning, and the duties and responsibilities of the personnel involved, and thorough knowledge of the systems and components for which the assigned staff will be responsible. (orig./RW)

  6. Simulation of machine-maintenance training in virtual environment

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu; Tezuka, Tetsuo; Kashiwa, Ken-ichiro; Ishii, Hirotake

    1997-01-01

    The periodical inspection of nuclear power plants requires a large workforce with a high degree of technical skill for the maintenance of various sorts of machines. Therefore, a new type of maintenance training system is required, in which trainees can get training safely, easily and effectively. In this study we developed a training simulation system for disassembling a check valve in a virtual environment (VE). The features of this system are as follows. Firstly, the trainees can execute tasks even in the wrong order, and can experience the resultant conditions. In order to realize this environment, we developed a new Petri-net model for representing the objects' states in VE. This Petri-net model has several original characteristics, which make it easier to manage changes of the objects' states. Furthermore, we made a support system for constructing the Petri-net model of machine-disassembling training, because the Petri-net model tends to become large. The effectiveness of this support system is shown through the system development. Secondly, this system can present the appropriate tasks to be performed next in VE whenever the trainee wants, even after some mistakes have been made. The effectiveness of this function has also been confirmed by experiments. (author)
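The state-tracking idea behind such a Petri-net model can be sketched minimally: places represent object states, transitions represent disassembly steps, and a step is only enabled when its precondition states hold. The valve states and step names below are invented for illustration and are not taken from the authors' model.

```python
class PetriNet:
    """Tiny place/transition net: a transition fires only when all of its
    input places hold a token, moving tokens to its output places."""

    def __init__(self, marking):
        self.marking = set(marking)   # places currently holding a token
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (set(inputs), set(outputs))

    def can_fire(self, name):
        inputs, _ = self.transitions[name]
        return inputs <= self.marking

    def fire(self, name):
        if not self.can_fire(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        self.marking = (self.marking - inputs) | outputs

# Hypothetical check-valve disassembly: the cover must come off before the disc.
net = PetriNet({"cover_on", "disc_in"})
net.add_transition("remove_cover", ["cover_on"], ["cover_off"])
net.add_transition("remove_disc", ["cover_off", "disc_in"], ["disc_out", "cover_off"])

print(net.can_fire("remove_disc"))   # False: cover still on
net.fire("remove_cover")
net.fire("remove_disc")
print(net.marking)
```

Because disabled transitions simply refuse to fire, a trainee's out-of-order attempt can be detected (and its consequences simulated) rather than silently ignored, which matches the "execute tasks even in the wrong order" feature described above.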

  7. Electrocortical Dynamics in Children with a Language-Learning Impairment Before and After Audiovisual Training.

    Science.gov (United States)

    Heim, Sabine; Choudhury, Naseem; Benasich, April A

    2016-05-01

    Detecting and discriminating subtle and rapid sound changes in the speech environment is a fundamental prerequisite of language processing, and deficits in this ability have frequently been observed in individuals with language-learning impairments (LLI). One approach to studying associations between dysfunctional auditory dynamics and LLI is to implement a training protocol tapping into this potential while quantifying pre- and post-intervention status. Event-related potentials (ERPs) are highly sensitive to the brain correlates of these dynamic changes and are therefore ideally suited for examining hypotheses regarding dysfunctional auditory processes. In this study, ERP measurements to rapid tone sequences (standard and deviant tone pairs) along with behavioral language testing were performed in 6- to 9-year-old LLI children (n = 21) before and after audiovisual training. A non-treatment group of children with typical language development (n = 12) was also assessed twice at a comparable time interval. The results indicated that the LLI group exhibited considerable gains on standardized measures of language. In terms of ERPs, we found evidence of changes in the LLI group specifically at the level of the P2 component, later than 250 ms after the onset of the second stimulus in the deviant tone pair. These changes suggested enhanced discrimination of deviant from standard tone sequences in widespread cortices in LLI children after training.

  8. For beginners in anaesthesia, self-training with an audiovisual checklist improves safety during anaesthesia induction: A prospective, randomised, controlled two-centre study.

    Science.gov (United States)

    Beck, Stefanie; Reich, Christian; Krause, Dorothea; Ruhnke, Bjarne; Daubmann, Anne; Weimann, Jörg; Zöllner, Christian; Kubitz, Jens

    2018-01-31

    Beginners in residency programmes in anaesthesia are challenged because the working environment is complex and they cannot rely on experience to meet challenges. During this early stage, residents need rules and structures to guide their actions and ensure patient safety. We investigated whether self-training with an electronic audiovisual checklist app on a mobile phone would produce a long-term improvement in safety-relevant actions during induction of general anaesthesia. During the first month of their anaesthesia residency, we randomised 26 residents to the intervention and control groups. The study was performed between August 2013 and December 2014 in two university hospitals in Germany. In addition to normal training, the residents of the intervention group trained themselves on well tolerated induction using the electronic checklist for at least 60 consecutive general anaesthesia inductions. After an initial learning phase, all residents were observed during one induction of general anaesthesia. The primary outcome was the number of safety items completed during this anaesthesia induction. Secondary outcomes were similar observations 4 and 8 weeks later. Immediately and 4 weeks after the first learning phase, residents in the intervention group completed a significantly greater number of safety checks than residents in the control group: 2.8 (95% confidence interval (CI) 0.4 to 5.1, P = 0.021, Cohen's d = 0.47) and 3.7 (95% CI 1.3 to 6.1, P = 0.003, Cohen's d = 0.61), respectively. The difference between the groups had disappeared by 8 weeks: the mean difference in the number of safety checks at 8 weeks was 0.4 (95% CI -2.0 to 2.8, P = 0.736, Cohen's d = 0.07). The use of an audiovisual self-training checklist improves safety-relevant behaviour in the early stages of a residency training programme in anaesthesia.
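The effect sizes reported in this abstract (Cohen's d on the between-group difference in completed safety checks) follow the standard pooled-standard-deviation formula. The sketch below applies that formula to made-up score lists, not the study's data.

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent groups using the pooled standard
    deviation: (mean_a - mean_b) / sd_pooled."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical safety-check counts for intervention vs. control residents
intervention = [18, 17, 19, 16, 18, 17]
control      = [15, 14, 16, 15, 13, 14]
print(round(cohens_d(intervention, control), 2))   # -> 2.86
```

By the usual rules of thumb, the study's values of 0.47 and 0.61 correspond to small-to-medium effects, while the 8-week value of 0.07 indicates essentially no remaining group difference.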

  9. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  10. Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.

    Science.gov (United States)

    Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu

    2018-05-01

    Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the-art diarization algorithms.

  11. Prediction-based Audiovisual Fusion for Classification of Non-Linguistic Vocalisations

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Prediction plays a key role in recent computational models of the brain, and it has been suggested that the brain constantly makes multisensory spatiotemporal predictions. Inspired by these findings, we tackle the problem of audiovisual fusion from a new perspective based on prediction. We train

  12. Multisensory integration in complete unawareness: evidence from audiovisual congruency priming.

    Science.gov (United States)

    Faivre, Nathan; Mudrik, Liad; Schwartz, Naama; Koch, Christof

    2014-11-01

    Multisensory integration is thought to require conscious perception. Although previous studies have shown that an invisible stimulus could be integrated with an audible one, none have demonstrated integration of two subliminal stimuli of different modalities. Here, pairs of identical or different audiovisual target letters (the sound /b/ with the written letter "b" or "m," respectively) were preceded by pairs of masked identical or different audiovisual prime digits (the sound /6/ with the written digit "6" or "8," respectively). In three experiments, awareness of the audiovisual digit primes was manipulated, such that participants were either unaware of the visual digit, the auditory digit, or both. Priming of the semantic relations between the auditory and visual digits was found in all experiments. Moreover, a further experiment showed that unconscious multisensory integration was not obtained when participants did not undergo prior conscious training of the task. This suggests that following conscious learning, unconscious processing suffices for multisensory integration. © The Author(s) 2014.

  13. A Study to Estimate the Effectiveness of Visual Testing Training for Aviation Maintenance Management

    Science.gov (United States)

    Law, Lewis Lyle

    2007-01-01

    The Air Commerce Act of 1926 set the beginning for standards in aviation maintenance. Even after deregulation in the late 1970s, maintenance standards and requirements still have not changed far from their initial criteria. After a potential candidate completes Federal Aviation Administration training prerequisites, they may test for their Airframe and Powerplant (A&P) certificate. After performing maintenance in the aviation industry for a minimum of three years, the technician may then test for their Inspection Authorization (IA). After receiving their Airframe and Powerplant certificate, a technician is said to have a license to perform. At no time within the three years to eligibility for Inspection Authorization are they required to attend higher-level inspection training. What a technician learns in the aviation maintenance industry is handed down from a seasoned technician to the new hire or is developed from lessons learned on the job. Only in Europe has the Joint Aviation Authorities (JAA) required higher-level training for their aviation maintenance technicians in order to control maintenance related accidents (Lu, 2005). Throughout the 1990s both the General Accounting Office (GAO) and the National Transportation Safety Board (NTSB) made public that the FAA is historically understaffed (GAO, 1996). In a safety recommendation the NTSB stated "The Safety Board continues to lack confidence in the FAA's commitment to provide effective quality assurance and safety oversight of the ATC system (NTSB, 1990)." The Federal Aviation Administration (FAA) has been known to be proactive in creating safer skies. Given such reports, one would expect the FAA also to be proactive in developing more stringent inspection training for aviation maintenance technicians. The purpose of this study is to estimate the effectiveness of higher-level inspection training, such as Visual Testing (VT) for aviation maintenance technicians, to improve the safety of aircraft and to make

  14. Competencies development and self-assessment in maintenance management e-training

    Science.gov (United States)

    Papathanassiou, Nikos; Pistofidis, Petros; Emmanouilidis, Christos

    2013-10-01

    The maintenance management function requires staff to possess a truly multidisciplinary set of skills. This includes competencies ranging from engineering and information technology to health and safety, management and finance, while also taking into account normative and legislative issues. This body of knowledge is rarely readily available within a single university course. The potential of e-learning in this field is significant, as it is a flexible and less costly alternative to conventional training. Furthermore, trainees can follow their own pace, as their available time is often a scarce commodity. This article discusses the development of tools to support competencies development and self-assessment in maintenance management. Based on requirements arising from professional bodies' guidelines and a user survey, the developed tools implement a dedicated maintenance management training curriculum. The results from pilot testing on academic and industrial user groups are discussed and user evaluations are linked with specific e-learning design issues.

  15. Exploring Virtual Mental Practice in Maintenance Task Training

    Science.gov (United States)

    Bauerle, Tim; Brnich, Michael J.; Navoyski, Jason

    2016-01-01

    Purpose: This paper aims to contribute to a general understanding of mental practice by investigating the utility of and participant reaction to a virtual reality maintenance training among underground coal mine first responders. Design/Methodology/Approach: Researchers at the National Institute for Occupational Safety and Health's Office of Mine…

  16. Nuclear instrument maintenance and technical training in Nuclear Energy Unit

    International Nuclear Information System (INIS)

    Mohamad Nasir Abdul Wahid

    1987-01-01

    Instrument maintenance service is a necessity in a nuclear research institute such as the Nuclear Energy Unit (NEU) to ensure the smooth running of its research activities. However, realising that maintenance back-up service for both nuclear and other scientific equipment is a major problem in developing countries such as Malaysia, NEU has set up an Instrumentation and Control Department to assist in rectifying the maintenance problem. Besides supporting in-house activities in NEU, the Instrumentation and Control Department (I and C) is also geared towards providing services to other organisations in Malaysia. This paper briefly outlines the activities of NEU in nuclear instrument maintenance as well as in technical training. (author)

  17. Initial training and technology transfer during the generational transition in the personnel of maintenance

    International Nuclear Information System (INIS)

    Gonzalez Anez, F.

    2006-01-01

    Significant progress in the training capabilities for nuclear power plant maintenance personnel has taken place since the mid-nineties. In the past, maintenance personnel acquired their competence over years in their job positions. Greater flexibility and new polyvalence requirements demand efficient training actions. In addition, the intake of new personnel associated with the generational change requires clear qualification processes. The objective is to develop didactic means and to have competent instructors in order to preserve the knowledge acquired over all these past years and transfer it to the new arrivals. This article summarises the actions and methods followed for the design, development and implementation of training plans for maintenance personnel. (Author)

  18. 29 CFR 2.13 - Audiovisual coverage prohibited.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage prohibited. 2.13 Section 2.13 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.13 Audiovisual coverage prohibited. The Department shall not permit audiovisual coverage of the...

  19. Transfer of Skills Evaluation for Assembly and Maintenance Training

    Directory of Open Access Journals (Sweden)

    Peveri Matteo

    2011-12-01

    Full Text Available One of the research topics within the EU project SKILLS1 was the training of Industrial Maintenance and Assembly (IMA) tasks. The IMA demonstrator developed comprises two different training platforms, one based on Virtual Reality (VR) technologies and the other on Augmented Reality (AR). To assess the efficiency of the developed training systems, different studies were conducted, followed by a final “Transfer of Skill” evaluation performed by service technicians at the “SIDEL industrial training centre” in Parma. This evaluation included qualitative methods (feedback collected in questionnaires) as well as quantitative methods (experiments with control groups). The results demonstrate that both platforms are useful and suitable training tools for IMA tasks, and that the AR training decreased the number of unsolved errors in the task.

  20. EHV network operation, maintenance, organization and training

    Energy Technology Data Exchange (ETDEWEB)

    Gravier, J P [Electricite de France (EDF), 75 - Paris (France)

    1994-12-31

    Service interruptions in the electricity supply have an ever-increasing social and industrial impact; it is thus fundamental to operate the network at its best level of performance. To face these changing conditions, Electricite de France has adapted its strategy accordingly: improving its organization for maintenance and operation, clarifying the operating procedures and giving further training to the staff. This work presents the above-mentioned issues. (author) 2 figs.

  1. 46 CFR 166.15 - Training for maintenance of discipline; ship sanitation; fire and lifeboat drills.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Training for maintenance of discipline; ship sanitation... maintenance of discipline; ship sanitation; fire and lifeboat drills. All students shall be trained to obey... the fundamentals of ship sanitation as prescribed by law and regulations, and shall be given intensive...

  2. Experiences on implementation of on-the-job training programmes for maintenance personnel in Asco and Vandellos II NPP

    International Nuclear Information System (INIS)

    Gonzalez Anez, F.

    2002-01-01

    This paper presents a process and methodology for the definition and implementation of On-the-Job Training Programmes (OJTP) for new maintenance personnel in Asco and Vandellos II. The content of the OJTP has been defined for each maintenance job position. A simplified task analysis was carried out to specify common and specific training. Generally, the specific maintenance training programmes include classroom and workshop training modules on (1) maintenance of components and (2) fundamentals of mechanical, electrical and instrumentation maintenance. This specific training is finally completed with an OJT programme based on the execution, observation and/or discussion of the main maintenance activities under the supervision of an entitled worker. Each lesson, task or activity is defined in a format where the training objective, milestones and deliverables are specified. The list of activities makes up the OJTP. It is based on the plant procedures and maintenance instructions applicable to each job position. Several participants or actors have been defined to implement the OJTP: the co-ordinator of the process, tutors for each OJT task, the line maintenance manager and the trainee. The co-ordinator is the link among all actors. He knows the OJTP scope and plans the training activities in agreement with the line maintenance manager. The co-ordinator carries out a tracking process, informs the training and maintenance managers about progress in the programme, elaborates the progress and final reports and keeps training records. Tutors, usually entitled workers in the job position, transfer knowledge to the trainee and discuss, review and assess the trainee's performance. The trainee carries out the scheduled tasks, keeps records of work done, prepares deliverables and reports his activities to the co-ordinator. The OJT programme for each new maintenance worker starts with a launching meeting with all involved actors. The goals of this meeting are to explain the OJTP scope and

  3. Plantilla 1: El documento audiovisual: elementos importantes

    OpenAIRE

    Alemany, Dolores

    2011-01-01

    Concept of the audiovisual document and of audiovisual documentation, delving into the distinction between moving-image documentation (with the possible incorporation of sound) and the concept of audiovisual documentation as proposed by Jorge Caldera. Differentiation between audiovisual documents, audiovisual works and audiovisual heritage according to Félix del Valle.

  4. Development of a wheelchair maintenance training programme and questionnaire for clinicians and wheelchair users.

    Science.gov (United States)

    Toro, Maria Luisa; Bird, Emily; Oyster, Michelle; Worobey, Lynn; Lain, Michael; Bucior, Samuel; Cooper, Rory A; Pearlman, Jonathan

    2017-11-01

    Purpose of study: The aims of this study were to develop a Wheelchair Maintenance Training Programme (WMTP) as a tool for clinicians to teach wheelchair users (and caregivers when applicable) in a group setting to perform basic maintenance at home in the USA, and to develop a Wheelchair Maintenance Training Questionnaire (WMT-Q) to evaluate wheelchair maintenance knowledge in clinicians, manual and power wheelchair users. The WMTP and WMT-Q were developed through an iterative process. A convenience sample of clinicians (n = 17), manual wheelchair users (n = 5), power wheelchair users (n = 4) and caregivers (n = 4) provided feedback on the training programme. A convenience sample of clinicians (n = 38), manual wheelchair users (n = 25), and power wheelchair users (n = 30) answered the WMT-Q throughout different phases of development. The subscores of the WMT-Q achieved a reliability that ranged between ICC(3,1) = 0.48 and ICC(3,1) = 0.89. The WMTP and WMT-Q were implemented with 15 clinicians who received in-person training in the USA using the materials developed, and showed a significant increase in all except one of the WMT-Q subscores after the WMTP. This training complements the World Health Organization basic wheelchair service curriculum, which only includes training of the clinicians, but does not include detailed information to train wheelchair users and caregivers. This training program offers a time-efficient method for providing education to end users in a group setting that may mitigate adverse consequences resulting from wheelchair breakdown. This training program has significant potential for impact among wheelchair users in areas where access to repair services is limited.

  5. Group fellowship training in nuclear spectroscopy instrumentation maintenance at the Seibersdorf Laboratories

    International Nuclear Information System (INIS)

    Xie, Y.; Abdel-Rassoul, A.A.

    1989-01-01

    Nuclear spectroscopy instruments are important tools for nuclear research and applications. Several types of nuclear spectrometers are being sent to numerous laboratories in developing countries through technical co-operation projects. These are mostly sophisticated systems based on different radiation detectors and analogue and digital circuitry. In most cases, they use microprocessor or computer techniques involving software and hardware. Maintenance and repair of these systems are a major problem in many developing countries because suppliers do not set up service stations. The Agency's Laboratories at Seibersdorf started conducting group fellowship training on nuclear spectroscopy instrumentation maintenance in 1987. This article describes the training programme.

  6. Effects of a 14-month low-cost maintenance training program in patients with chronic systolic heart failure

    DEFF Research Database (Denmark)

    Prescott, Eva; Hjardem-Hansen, Rasmus; Ørkild, Bodil

    2009-01-01

    Exercise training is known to be beneficial in chronic heart failure (CHF) patients, but there is a lack of studies following patient groups for longer durations with maintenance training programs to defer deconditioning.

  7. Education and training of operators and maintenance staff at commercial nuclear power stations in Japan

    International Nuclear Information System (INIS)

    Takahashi, M.; Kataoka, H.

    1998-01-01

    Safe and stable operation of a nuclear power station requires the systematic fostering of personnel. In Japan, with the objectives of securing qualified people over the long term and maintaining and improving their skills and knowledge, the utilities have created strict personnel training plans for continuous education and training. Concrete examples of the education and training of operators and maintenance personnel at commercial nuclear power stations in Japan, such as education systems, training facilities and curriculum contents, are detailed, including some related matters. Recent activities to keep up with the changing environment surrounding the education and training of operators and maintenance staff are also mentioned. (author)

  8. Lessons learned from operating experience, maintenance procedures and training measures

    International Nuclear Information System (INIS)

    Guttner, K.; Gronau, D.

    2003-01-01

    Training programmes for nuclear facility personnel, as a result of the development phase of SAT, have to be validated in the subsequent implementation and evaluation phases, with the consequence of several feedback activities in the whole training process. The effectiveness of this procedure has to be evaluated especially with respect to an improvement in safety culture, shorter outage times or better plant performance, resulting in a smaller number of incidents due to human failures. The first two arguments are directly connected with all types of maintenance work in a nuclear power plant and the related preparatory training measures. The reduction of incidents due to human failures is the result of different influences, i.e. training of the operational as well as of the maintenance personnel, together with changes to the operating procedures or system design. Though an evaluation of the training process should always be based on a clear definition of criteria by which the fulfilment of the learning objectives can be measured directly, the real effectiveness of training is proven by the behaviour and attitude of the personnel, which can only be inferred from indirect indicators. This is discussed in more detail for some examples, partly related to the above-mentioned arguments. Excellent plant performance, representing a general objective of all activities, can be analysed through the changed number and causes of incidents in a plant during its operating time. Two further examples are taken from the reactor service field, where there is a tendency to reduce individual dose rates through changed devices and/or procedures as an outcome of training experience with mock-ups. Finally, the rationalisation of refresher training for operational personnel through interactive teaching programs (Computer-Based Training, CBT), which integrate learning objectives together with a test module, is presented. (author)

  9. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Mehrdad eSeirafi; Peter eDe Weerd; Alan J Pegna; Beatrice ede Gelder

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  10. Inventory-transportation integrated optimization for maintenance spare parts of high-speed trains

    Science.gov (United States)

    Wang, Jiaxi; Wang, Huasheng; Wang, Zhongkai; Li, Jian; Lin, Ruixi; Xiao, Jie; Wu, Jianping

    2017-01-01

    This paper presents a 0–1 programming model aimed at obtaining the optimal inventory policy and transportation mode for maintenance spare parts of high-speed trains. To obtain the model parameters for occasionally-replaced spare parts, a demand estimation method based on the maintenance strategies of China’s high-speed railway system is proposed. In addition, we analyse the shortage time using PERT, and then calculate the unit time shortage cost from the viewpoint of train operation revenue. Finally, a real-world case study from Shanghai Depot is conducted to demonstrate our method. Computational results offer an effective and efficient decision support for inventory managers. PMID:28472097
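    The joint inventory-transportation decision described in this record can be illustrated with a heavily simplified sketch. All parameter values, mode names, and the shortage-cost proxy below are invented for illustration and are not taken from the paper; the point is only the shape of the 0-1 choice: enumerate candidate (stock level, transport mode) pairs and pick the one with minimum expected holding, shipment, and shortage cost.

```python
# Toy 0-1 selection over (stock level, transport mode) pairs for one spare part.
# All parameters are illustrative assumptions, not values from the study.
from itertools import product

DEMAND_RATE = 2.0              # expected failures per maintenance cycle (assumed)
HOLDING_COST = 5.0             # cost per unit kept in stock (assumed)
SHORTAGE_COST_PER_HOUR = 40.0  # operation revenue lost per hour of shortage (assumed)

# transport mode -> (cost per shipment, replenishment lead time in hours)
MODES = {"rail": (100.0, 48.0), "road": (160.0, 24.0), "air": (400.0, 6.0)}

def expected_cost(stock: int, mode: str) -> float:
    """Holding + shipment + expected shortage cost for one (stock, mode) choice."""
    ship_cost, lead_time = MODES[mode]
    # crude shortage proxy: expected demand not covered by stock waits one lead time
    shortfall = max(DEMAND_RATE - stock, 0.0)
    return (HOLDING_COST * stock
            + ship_cost
            + SHORTAGE_COST_PER_HOUR * lead_time * shortfall)

def best_policy(max_stock: int = 5):
    """Enumerate the binary choices and return the cheapest (stock, mode) pair."""
    options = product(range(max_stock + 1), MODES)
    return min(options, key=lambda sm: expected_cost(*sm))

stock, mode = best_policy()
print(stock, mode, round(expected_cost(stock, mode), 1))
```

    With these invented numbers the cheapest policy holds just enough stock to cover expected demand and uses the slow, cheap mode; the paper's actual model additionally couples many parts, depot constraints, and the maintenance strategies of China's high-speed railway system.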

  11. The application of utility analysis processes to estimate the impact of training for nuclear maintenance personnel

    International Nuclear Information System (INIS)

    Groppel, C.F.

    1991-01-01

    The primary objectives of this study were to test two utility analysis models, the Cascio-Ramos Estimate of Performance in Dollars (CREPID) model and the Godkewitsch financial utility analysis model, and to determine their appropriateness as tools for evaluating training. This study was conducted in conjunction with Philadelphia Electric Company's Nuclear Training Group. Job performance of nuclear maintenance workers was assessed to document the impact of the training program. Assessment of job performance covered six job performance themes. Additionally, front-line nuclear maintenance supervisors were interviewed to determine their perceptions of the nuclear maintenance training. A comparison of supervisors' perceptions and the outcomes of the utility analysis models was made to determine the appropriateness of utility analysis as a quantitative tool for evaluating the nuclear maintenance training program. Application of the CREPID utility analysis model indicated that the dollar value of the benefits of training was $5,843,750, which represented only four of the job performance themes. Application of the Godkewitsch utility analysis model indicated that the dollar value of the benefits of training was $3,083,845, which represented all six performance themes. A comparison of the outcomes indicated a sizeable difference between the dollar values produced by the models. Supervisors indicated training resulted in improved productivity, i.e., improved efficiency and effectiveness. Additionally, supervisors believed training was valuable because it provided nonmonetary benefits, e.g., improved self-esteem and confidence. The application of utility analysis addressed only the monetary benefits of training. The variation evidenced by the difference in the outcomes of the two models suggests that utility analysis “estimates” may not accurately reflect the impact of training.

  12. Computer-Based Simulations for Maintenance Training: Current ARI Research. Technical Report 544.

    Science.gov (United States)

    Knerr, Bruce W.; And Others

    Three research efforts that used computer-based simulations for maintenance training were in progress when this report was written: Game-Based Learning, which investigated the use of computer-based games to train electronics diagnostic skills; Human Performance in Fault Diagnosis Tasks, which evaluated the use of context-free tasks to train…

  13. RECURSO AUDIOVISUAL PARA ENSEÑAR Y APRENDER EN EL AULA: ANÁLISIS Y PROPUESTA DE UN MODELO FORMATIVO

    Directory of Open Access Journals (Sweden)

    Damian Marilu Mendoza Zambrano

    2015-09-01

    Full Text Available The use of the audiovisual, graphic and digital resources currently being introduced into the education system is spreading across several countries of the region, such as Chile, Colombia, Mexico, Cuba, El Salvador, Uruguay and Venezuela. Subtopics related to media education are analysed and justified, starting from the initiative of Spain and Portugal, countries that became international protagonists of several educational models in the university context. Owing to the expansion of, and focus on, computing and the information and communication networks of the Internet, the audiovisual medium as a technological instrument is gaining ground as a dynamic and integrating resource, with special characteristics that distinguish it from the rest of the media that make up the audiovisual ecosystem. As a result of this research, two lines of application are proposed: A. A proposal for iconic and audiovisual language as a learning objective and/or curricular subject in university study plans, with workshops on the audiovisual document, digital photography and audiovisual production; and B. The use of audiovisual resources as an educational medium, which would imply a prior training process for the teaching community, with activities recommended to teachers and students respectively. Accordingly, suggestions are presented that allow both lines of academic action to be implemented. KEYWORDS: Media Literacy; Audiovisual Education; Media Competence; Educommunication.

  14. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    Full Text Available Abstract – This paper introduces the ‘Interactive Model’ of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases – ‘Multimodal Critical Analysis of Scripts’ (MuCrAS) and ‘Multimodal Re-Textualization of Scripts’ (MuReTS). Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004). By resorting to Critical Discourse Analysis (Fairclough 1995, 2001), to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006), the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application is discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  15. Integrating Safety in the Aviation System: Interdepartmental Training for Pilots and Maintenance Technicians

    Science.gov (United States)

    Mattson, Marifran; Petrin, Donald A.; Young, John P.

    2001-01-01

    The study of human factors has had a decisive impact on the aviation industry. However, the entire aviation system often is not considered in researching, training, and evaluating human factors issues, especially with regard to safety. In both conceptual and practical terms, we argue for the proactive management of human error from both an individual and an organizational systems perspective. The results of a multidisciplinary research project incorporating survey data from professional pilots and maintenance technicians and an exploratory study integrating students from relevant disciplines are reported. Survey findings suggest that latent safety errors may occur during the maintenance discrepancy reporting process because pilots and maintenance technicians do not effectively interact with one another. The importance of interdepartmental or cross-disciplinary training for decreasing these errors and increasing safety is discussed as a primary implication.

  16. 29 CFR 2.12 - Audiovisual coverage permitted.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true Audiovisual coverage permitted. 2.12 Section 2.12 Labor Office of the Secretary of Labor GENERAL REGULATIONS Audiovisual Coverage of Administrative Hearings § 2.12 Audiovisual coverage permitted. The following are the types of hearings where the Department...

  17. Audiovisual preservation strategies, data models and value-chains

    OpenAIRE

    Addis, Matthew; Wright, Richard

    2010-01-01

    This is a report on preservation strategies, models and value-chains for digital file-based audiovisual content. The report includes: (a) current and emerging value-chains and business models for audiovisual preservation; (b) a comparison of preservation strategies for audiovisual content, including their strengths and weaknesses; and (c) a review of current preservation metadata models, and requirements for extension to support audiovisual files.

  18. Audiovisual segregation in cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Simon Landry

    Full Text Available It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users that are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech sound and (iii) non-altered speech sound. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

  19. Community Building Services Training Program: A Model Training Program to Provide Technical Training for Minority Adults in Construction, Building Maintenance,and Property Management. Final Report.

    Science.gov (United States)

    Community Building Maintenance Corp., Chicago, IL.

    A demonstration program, administered by a community based building maintenance, management, and construction corporation, was developed to provide technical training for minority adults in construction, building maintenance, and property management in the Chicago area. The program was concerned with seeking solutions to the lack of housing, job…

  20. Assessment of rural soundscapes with high-speed train noise.

    Science.gov (United States)

    Lee, Pyoung Jik; Hong, Joo Young; Jeon, Jin Yong

    2014-06-01

    In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. In addition, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions. © 2013.
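    The LAeq index used in this record is the energy-equivalent average of A-weighted levels over the measurement period, which a short sketch makes concrete (the sample levels below are invented; only the averaging formula is standard):

```python
# Energy-equivalent A-weighted sound level (LAeq) from a series of short-term
# A-weighted levels in dB, sampled over equal time intervals.
import math

def laeq(levels_db):
    """LAeq = 10*log10 of the mean of the per-sample energies 10^(L/10)."""
    energies = [10 ** (level / 10.0) for level in levels_db]
    return 10.0 * math.log10(sum(energies) / len(energies))

# Illustrative quiet rural background with one loud train pass-by (assumed values):
samples = [45.0, 45.0, 45.0, 85.0]  # dB(A)
print(round(laeq(samples), 1))
```

    Because the average is taken over sound energy rather than decibels, a single high-speed train pass-by dominates the equivalent level of an otherwise quiet rural period, which is why LAeq is a natural index for this kind of soundscape assessment.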

  1. The Education, Audiovisual and Culture Executive Agency: Helping You Grow Your Project

    Science.gov (United States)

    Education, Audiovisual and Culture Executive Agency, European Commission, 2011

    2011-01-01

    The Education, Audiovisual and Culture Executive Agency (EACEA) is a public body created by a Decision of the European Commission and operates under its supervision. It is located in Brussels and has been operational since January 2006. Its role is to manage European funding opportunities and networks in the fields of education and training,…

  2. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    Science.gov (United States)

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that the dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations or both faces and vocalizations had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone. Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that

  3. Effect of Different Training Methods on Stride Parameters in Speed Maintenance Phase of 100-m Sprint Running.

    Science.gov (United States)

    Cetin, Emel; Hindistan, I Ethem; Ozkaya, Y Gul

    2018-05-01

    Cetin, E, Hindistan, IE, Ozkaya, YG. Effect of different training methods on stride parameters in speed maintenance phase of 100-m sprint running. J Strength Cond Res 32(5): 1263-1272, 2018-This study examined the effects of 2 different training methods involving sloping surfaces on stride parameters in the speed maintenance phase of 100-m sprint running. Twenty recreationally active students were assigned to one of 3 groups: combined training (Com), horizontal training (H), and control (C). The Com group performed uphill and downhill training on a surface sloped at 4°, whereas the H group trained on a horizontal surface, 3 days a week for 8 weeks. The speed maintenance and deceleration phases were divided into 10-m intervals, and running time (t), running velocity (RV), step frequency (SF), and step length (SL) were measured before and after the training period. After the 8-week training program, t was shortened by 3.97% in the Com group and 2.37% in the H group. Running velocity over the full 100-m distance also increased, by 4.13% and 2.35% in the Com and H groups, respectively. In the speed maintenance phase, although t and maximal RV (RVmax) were statistically unaltered over the phase as a whole, t decreased within 10-m intervals and the point of RVmax occurred 10 m earlier in both training groups. Step length increased at 60-70 m and SF decreased at 70-80 m in the H group; SL increased with a concomitant decrease in SF at 80-90 m in the Com group. Both training groups largely maintained RVmax through the speed maintenance phase. In conclusion, although both training methods decreased running time and increased RV, the Com method was the more effective at improving RV, and this improvement originated from positive changes in SL during the speed maintenance phase.
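
The stride parameters above are linked by a simple identity: running velocity is the product of step length and step frequency (RV = SL × SF), which is why the abstract can attribute the Com group's velocity gain to SL changes despite a concomitant SF decrease. A minimal sketch with hypothetical split values:

```python
def running_velocity(step_length_m, step_freq_hz):
    """Running velocity (m/s) = step length (m) x step frequency (steps/s)."""
    return step_length_m * step_freq_hz

# Hypothetical 10-m-split values: a longer step at a lower frequency can
# trade off exactly, leaving velocity unchanged, which is why SL and SF
# must be read together with RV rather than in isolation.
v_a = running_velocity(2.00, 4.5)   # 9.0 m/s
v_b = running_velocity(2.25, 4.0)   # 9.0 m/s
```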

  4. Audiovisual Styling and the Film Experience

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2015-01-01

    Approaches to music and audiovisual meaning in film appear to be very different in nature and scope when considered from the point of view of experimental psychology or humanistic studies. Nevertheless, this article argues that experimental studies square with ideas of audiovisual perception...... and meaning in humanistic film music studies in two ways: through studies of vertical synchronous interaction and through studies of horizontal narrative effects. Also, it is argued that the combination of insights from quantitative experimental studies and qualitative audiovisual film analysis may actually...... be combined into a more complex understanding of how audiovisual features interact in the minds of their audiences. This is demonstrated through a review of a series of experimental studies. Yet, it is also argued that textual analysis and concepts from within film and music studies can provide insights...

  5. Audiovisual speech perception development at varying levels of perceptual processing

    OpenAIRE

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-01-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the le...

  6. Experience with training of operating and maintenance personnel of nuclear power plants

    International Nuclear Information System (INIS)

    Pospisil, M.; Cencinger, F.

    1988-01-01

    The system of specialist training of personnel for Czechoslovak nuclear power plants is described. Training consists of basic training, vocational training, and training for the respective job. The Research Institute for Nuclear Power Plants is responsible for the training; the actual training takes place at three training centres. Personnel are divided into seven categories for training purposes: senior technical and economic staff; shift leaders whose work has an immediate effect on nuclear safety; engineering and technical personnel of technical units; shift leaders of technical units; personnel in technical units; shift service and operating personnel; and maintenance workers. Experience with training courses run at the training centre is summed up. Since 1980 the Centre has been training personnel mainly for the Dukovany nuclear power plant. Recommendations are presented for training personnel for the Temelin nuclear power plant. (Z.M.)

  7. La regulación audiovisual: argumentos a favor y en contra The audio-visual regulation: the arguments for and against

    Directory of Open Access Journals (Sweden)

    Jordi Sopena Palomar

    2008-03-01

    Full Text Available El artículo analiza la efectividad de la regulación audiovisual y valora los diversos argumentos a favor y en contra de la existencia de consejos reguladores a nivel estatal. El debate sobre la necesidad de un organismo de este calado en España todavía persiste. La mayoría de los países comunitarios se han dotado de consejos competentes en esta materia, como es el caso del OFCOM en el Reino Unido o el CSA en Francia. En España, la regulación audiovisual se limita a organismos de alcance autonómico, como son el Consejo Audiovisual de Navarra, el de Andalucía y el Consell de l’Audiovisual de Catalunya (CAC, cuyo modelo también es abordado en este artículo. The article analyzes the effectiveness of the audio-visual regulation and assesses the different arguments for and against the existence of the broadcasting authorities at the state level. The debate of the necessity of a Spanish organism of regulation is still active. Most of the European countries have created some competent authorities, like the OFCOM in United Kingdom and the CSA in France. In Spain, the broadcasting regulation is developed by regional organisms, like the Consejo Audiovisual de Navarra, the Consejo Audiovisual de Andalucía and the Consell de l’Audiovisual de Catalunya (CAC, whose case is also studied in this article.

  8. A cross-sectional study of hearing thresholds among 4627 Norwegian train and track maintenance workers.

    Science.gov (United States)

    Lie, Arve; Skogstad, Marit; Johnsen, Torstein Seip; Engdahl, Bo; Tambs, Kristian

    2014-10-16

    Railway workers performing maintenance of trains and tracks could be at risk of developing noise-induced hearing loss, since they are exposed to noise levels of 75-90 dB(A) with peak exposures of 130-140 dB(C). The objective was to make a risk assessment by comparing the hearing thresholds of train and track maintenance workers with those of a reference group not exposed to noise and with reference values from ISO 1999. Design: cross-sectional. Setting: a major Norwegian railway company. Participants: 1897 train and 2730 track maintenance workers, all male and exposed to noise, and 2872 male railway traffic controllers and office workers not exposed to noise. The primary outcome was the hearing threshold (pure-tone audiometry, frequencies from 0.5 to 8 kHz), and the secondary outcome was the prevalence of audiometric notches (Coles notch) in the most recent audiogram. Train and track maintenance workers aged 45 years or older had a small mean hearing loss of 3-5 dB in the 3-6 kHz region. The hearing loss was smaller among workers younger than 45 years. Audiometric notches were slightly more prevalent among the noise-exposed group (59-64%) than among controls (49%) for all age groups; they may therefore be a sensitive measure for disclosing early hearing loss at a group level. In conclusion, train and track maintenance workers aged 45 years or older have, on average, a slightly greater hearing loss and more audiometric notches than reference groups not exposed to noise. Younger (<45 years) workers have hearing thresholds comparable to the controls. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
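
The Coles notch counted in this record is a high-frequency threshold elevation bracketed by better hearing on both sides. A sketch of one commonly used formulation of the criterion (the study's exact definition may differ; the `has_coles_notch` helper, the 10 dB cut-off, and the example audiograms are illustrative assumptions):

```python
def has_coles_notch(thresholds):
    """Screen one ear's audiogram for a high-frequency notch.

    `thresholds` maps frequency in kHz to hearing threshold in dB HL
    (larger = worse hearing). One common formulation of the Coles
    criterion: a threshold at 3, 4, or 6 kHz that is at least 10 dB
    worse than the better of 1 and 2 kHz AND at least 10 dB worse
    than the better of 6 and 8 kHz (recovery above the notch).
    """
    low = min(thresholds[1], thresholds[2])    # low-frequency anchor
    high = min(thresholds[6], thresholds[8])   # high-frequency recovery
    return any(thresholds[f] - low >= 10 and thresholds[f] - high >= 10
               for f in (3, 4, 6))

# Illustrative audiograms (hypothetical values, dB HL):
notched = {0.5: 10, 1: 10, 2: 10, 3: 15, 4: 30, 6: 20, 8: 10}  # notch at 4 kHz
flat = {0.5: 10, 1: 10, 2: 10, 3: 12, 4: 15, 6: 14, 8: 12}     # no notch
```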

  9. Audiovisual perception in amblyopia: A review and synthesis.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-05-17

    Amblyopia is a common developmental sensory disorder that has been extensively and systematically investigated as a unisensory visual impairment. However, its effects are increasingly recognized to extend beyond vision to the multisensory domain. Indeed, amblyopia is associated with altered cross-modal interactions in audiovisual temporal perception, audiovisual spatial perception, and audiovisual speech perception. Furthermore, although the visual impairment in amblyopia is typically unilateral, the multisensory abnormalities tend to persist even when viewing with both eyes. Knowledge of the extent and mechanisms of the audiovisual impairments in amblyopia, however, remains in its infancy. This work aims to review our current understanding of audiovisual processing and integration deficits in amblyopia, and considers the possible mechanisms underlying these abnormalities. Copyright © 2018. Published by Elsevier Ltd.

  10. Evaluating Virtual Reality and Augmented Reality Training for Industrial Maintenance and Assembly Tasks

    Science.gov (United States)

    Gavish, Nirit; Gutiérrez, Teresa; Webel, Sabine; Rodríguez, Jorge; Peveri, Matteo; Bockholt, Uli; Tecchia, Franco

    2015-01-01

    The current study evaluated the use of virtual reality (VR) and augmented reality (AR) platforms, developed within the scope of the SKILLS Integrated Project, for industrial maintenance and assembly (IMA) tasks training. VR and AR systems are now widely regarded as promising training platforms for complex and highly demanding IMA tasks. However,…

  11. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  12. Decreased BOLD responses in audiovisual processing

    NARCIS (Netherlands)

    Wiersinga-Post, Esther; Tomaskovic, Sonja; Slabu, Lavinia; Renken, Remco; de Smit, Femke; Duifhuis, Hendrikus

    2010-01-01

    Audiovisual processing was studied in a functional magnetic resonance imaging study using the McGurk effect. Perceptual responses and the brain activity patterns were measured as a function of audiovisual delay. In several cortical and subcortical brain areas, BOLD responses correlated negatively

  13. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning pr...
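
The "sparse sum of kernels" coding step can be illustrated with a greedy matching-pursuit decomposition. The sketch below is a 1-D, audio-only simplification under stated assumptions (unit-norm kernels, fixed dictionary); the paper's actual algorithm codes bimodal audio-video kernels and learns them jointly:

```python
import numpy as np

def matching_pursuit(signal, kernels, n_iter):
    """Greedy sparse coding: repeatedly pick the (kernel, shift) pair whose
    correlation with the residual is largest, store its coefficient, and
    subtract the scaled, shifted kernel. Kernels are assumed unit-norm."""
    residual = np.asarray(signal, dtype=float).copy()
    atoms = []  # (kernel index, shift, coefficient)
    for _ in range(n_iter):
        best_coef, best_k, best_s = 0.0, None, None
        for k, g in enumerate(kernels):
            corr = np.correlate(residual, g, mode="valid")  # sliding dot product
            s = int(np.argmax(np.abs(corr)))
            if abs(corr[s]) > abs(best_coef):
                best_coef, best_k, best_s = corr[s], k, s
        atoms.append((best_k, best_s, best_coef))
        residual[best_s:best_s + len(kernels[best_k])] -= best_coef * kernels[best_k]
    return atoms, residual

# A toy signal built from two shifted unit-norm kernels is recovered exactly.
g0, g1 = np.array([0.6, 0.8]), np.array([1.0, 0.0])
signal = np.zeros(8)
signal[2:4] += 3.0 * g0
signal[5:7] += 2.0 * g1
atoms, residual = matching_pursuit(signal, [g0, g1], n_iter=2)
```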

  14. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  15. Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.

    2007-01-01

    Laughter is a highly variable signal, and can express a spectrum of emotions. This makes the automatic detection of laughter a challenging but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is performed

  16. The application of systematic analysis to the development for maintenance staffs training contents in Nuclear Power Station

    International Nuclear Information System (INIS)

    Ishida, Takahisa; Maruo, Tadashi; Kurokawa, Kazuya

    2005-01-01

    To survive the tide of electric power industry deregulation, actions for streamlining our operations must be compatible with safe plant operation. With regard to human resources, the retirement of first-line engineers who developed their practical technical skills through experiencing numerous problems or plant construction raises concerns about a decline in our engineering abilities. Under these circumstances, to prepare sophisticated maintenance engineers, training programs must be optimized by considering the most effective and efficient methods and materials. Although the IAEA's SAT (Systematic Approach to Training) method is widely applied to training nuclear power plant operators, there are few reports of its application to maintenance engineers. This paper discusses our attempt to introduce more effective and efficient training for maintenance engineers, referring to the SAT method to analyze the education program as a whole. (author)

  17. 49 CFR 214.355 - Training and qualification in on-track safety for operators of roadway maintenance machines.

    Science.gov (United States)

    2010-10-01

    ... operators of roadway maintenance machines. 214.355 Section 214.355 Transportation Other Regulations Relating... operators of roadway maintenance machines. (a) The training and qualification of roadway workers who operate roadway maintenance machines shall include, as a minimum: (1) Procedures to prevent a person from being...

  18. Lip movements affect infants' audiovisual speech perception.

    Science.gov (United States)

    Yeung, H Henny; Werker, Janet F

    2013-05-01

    Speech is robustly audiovisual from early in infancy. Here we show that audiovisual speech perception in 4.5-month-old infants is influenced by sensorimotor information related to the lip movements they make while chewing or sucking. Experiment 1 consisted of a classic audiovisual matching procedure, in which two simultaneously displayed talking faces (visual [i] and [u]) were presented with a synchronous vowel sound (audio /i/ or /u/). Infants' looking patterns were selectively biased away from the audiovisual matching face when the infants were producing lip movements similar to those needed to produce the heard vowel. Infants' looking patterns returned to those of a baseline condition (no lip movements, looking longer at the audiovisual matching face) when they were producing lip movements that did not match the heard vowel. Experiment 2 confirmed that these sensorimotor effects interacted with the heard vowel, as looking patterns differed when infants produced these same lip movements while seeing and hearing a talking face producing an unrelated vowel (audio /a/). These findings suggest that the development of speech perception and speech production may be mutually informative.

  19. "Business Continuity and Information Security Maintenance" Masters’ Training Program

    OpenAIRE

    Miloslavskaya , Natalia; Senatorov , Mikhail; Tolstoy , Alexandr; Zapechnikov , Sergei

    2013-01-01

    Part 1: WISE 8; International audience; The experience of preparing for the "Business Continuity and Information Security Maintenance" (BC&ISM) Masters’ program implementation and realization at the "Information Security of Banking Systems" Department of the National Research Nuclear University MEPhI (NRNU MEPhI, Moscow, Russia) is presented. Justification of the educational direction choice for BC&ISM professionals is given. The model of IS Master being trained on this program is described. ...

  20. The level of audiovisual print-speech integration deficits in dyslexia.

    Science.gov (United States)

    Kronschnabel, Jens; Brem, Silvia; Maurer, Urs; Brandeis, Daniel

    2014-09-01

    The classical phonological deficit account of dyslexia is increasingly linked to impairments in grapho-phonological conversion, and to dysfunctions in superior temporal regions associated with audiovisual integration. The present study investigates mechanisms of audiovisual integration in typical and impaired readers at the critical developmental stage of adolescence. Congruent and incongruent audiovisual as well as unimodal (visual only and auditory only) material was presented. Audiovisual presentations were single letters and three-letter (consonant-vowel-consonant) stimuli accompanied by matching or mismatching speech sounds. Three-letter stimuli exhibited fast phonetic transitions as in real-life language processing and reading. Congruency effects, i.e. different brain responses to congruent and incongruent stimuli were taken as an indicator of audiovisual integration at a phonetic level (grapho-phonological conversion). Comparisons of unimodal and audiovisual stimuli revealed basic, more sensory aspects of audiovisual integration. By means of these two criteria of audiovisual integration, the generalizability of audiovisual deficits in dyslexia was tested. Moreover, it was expected that the more naturalistic three-letter stimuli are superior to single letters in revealing group differences. Electrophysiological and hemodynamic (EEG and fMRI) data were acquired simultaneously in a simple target detection task. Applying the same statistical models to event-related EEG potentials and fMRI responses allowed comparing the effects detected by the two techniques at a descriptive level. Group differences in congruency effects (congruent against incongruent) were observed in regions involved in grapho-phonological processing, including the left inferior frontal and angular gyri and the inferotemporal cortex. Importantly, such differences also emerged in superior temporal key regions. Three-letter stimuli revealed stronger group differences than single letters. No

  1. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan.

    Science.gov (United States)

    Noel, Jean-Paul; De Niear, Matthew; Van der Burg, Erik; Wallace, Mark T

    2016-01-01

    Multisensory interactions are well established to convey an array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.

  2. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    International Nuclear Information System (INIS)

    Lee, D; Pollock, S; Makhija, K; Keall, P; Greer, P; Arm, J; Hunter, P; Kim, T

    2014-01-01

    Purpose: To investigate whether the breathing-guidance system, audiovisual (AV) biofeedback, improves tumor motion consistency for lung cancer patients. This would minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated with five lung cancer patients (age: 55 to 64), who underwent a training session to become familiarized with AV biofeedback, followed by two MRI sessions on different dates (pre and mid treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after normalizing the 2D images per dataset and applying a Gaussian filter per image. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion towards achieving more accurate medical imaging and radiation therapy procedures.

  3. SU-E-J-29: Audiovisual Biofeedback Improves Tumor Motion Consistency for Lung Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Lee, D; Pollock, S; Makhija, K; Keall, P [The University of Sydney, Camperdown, NSW (Australia); Greer, P [The University of Newcastle, Newcastle, NSW (Australia); Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Arm, J; Hunter, P [Calvary Mater Newcastle Hospital, Newcastle, NSW (Australia); Kim, T [The University of Sydney, Camperdown, NSW (Australia); University of Virginia Health System, Charlottesville, VA (United States)

    2014-06-01

    Purpose: To investigate whether the breathing-guidance system, audiovisual (AV) biofeedback, improves tumor motion consistency for lung cancer patients. This would minimize respiratory-induced tumor motion variations across cancer imaging and radiotherapy procedures. This is the first study to investigate the impact of respiratory guidance on tumor motion. Methods: Tumor motion consistency was investigated with five lung cancer patients (age: 55 to 64), who underwent a training session to become familiarized with AV biofeedback, followed by two MRI sessions on different dates (pre and mid treatment). During the training session in a CT room, two patient-specific breathing patterns were obtained before (Breathing-Pattern-1) and after (Breathing-Pattern-2) training with AV biofeedback. In each MRI session, four MRI scans were performed to obtain 2D coronal and sagittal image datasets in free breathing (FB) and with AV biofeedback utilizing Breathing-Pattern-2. Tumor motion was extracted from image pixel values after normalizing the 2D images per dataset and applying a Gaussian filter per image. Tumor motion consistency in the superior-inferior (SI) direction was evaluated in terms of average tumor motion range and period. Results: Audiovisual biofeedback improved tumor motion consistency by 60% (p value = 0.019), from 1.0±0.6 mm (FB) to 0.4±0.4 mm (AV) in SI motion range, and by 86% (p value < 0.001), from 0.7±0.6 s (FB) to 0.1±0.2 s (AV) in period. Conclusion: This study demonstrated that audiovisual biofeedback improves both breathing pattern and tumor motion consistency for lung cancer patients. These results suggest that AV biofeedback has the potential to facilitate reproducible tumor motion towards achieving more accurate medical imaging and radiation therapy procedures.
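
The SI range and period statistics reported above can be illustrated on a 1-D motion trace. This is a simplified sketch under stated assumptions (the `motion_range_and_period` helper is hypothetical, and the study extracted the trace from normalized, Gaussian-filtered MR images rather than from a ready-made signal):

```python
import numpy as np

def motion_range_and_period(trace, fs):
    """Mean peak-to-trough amplitude (trace units) and mean breathing
    period (s) of a superior-inferior motion trace sampled at fs Hz.
    Peaks/troughs are local extrema above/below the trace mean."""
    t = np.asarray(trace, dtype=float)
    c = t - t.mean()
    peaks = [i for i in range(1, len(t) - 1)
             if c[i] > 0 and t[i] >= t[i - 1] and t[i] > t[i + 1]]
    troughs = [i for i in range(1, len(t) - 1)
               if c[i] < 0 and t[i] <= t[i - 1] and t[i] < t[i + 1]]
    motion_range = t[peaks].mean() - t[troughs].mean()
    period = float(np.mean(np.diff(peaks))) / fs
    return motion_range, period

# Synthetic breathing trace: 4-s period, sampled at 10 Hz for 12 s.
ts = np.arange(0.0, 12.0, 0.1)
rng, period = motion_range_and_period(np.sin(2 * np.pi * 0.25 * ts), fs=10.0)
```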

  4. Audiovisual Interaction

    DEFF Research Database (Denmark)

    Karandreas, Theodoros-Alexandros

    in a manner that allowed the subjective audiovisual evaluation of loudspeakers under controlled conditions. Additionally, unimodal audio and visual evaluations were used as a baseline for comparison. The same procedure was applied in the investigation of the validity of less than optimal stimuli presentations...

  5. Managed maintenance, the next step in power plant maintenance

    International Nuclear Information System (INIS)

    Butterworth, G.; Anderson, T.M.

    1984-01-01

    The Westinghouse Nuclear Services Integration Division managed maintenance services are described. Essential to the management and control of a total plant maintenance programme is the development of a comprehensive maintenance specification. During recent years Westinghouse has jointly developed total plant engineering-based maintenance specifications with a number of utilities. The process employed and the experience to date are described. To implement the maintenance programme efficiently, Westinghouse has developed a computer software program specifically designed for day-to-day use at the power plant by maintenance personnel. This program retains an equipment maintenance history, schedules maintenance activities, issues work orders, and performs a number of sophisticated analyses of the maintenance backlog and forecast, equipment failure rates, etc. The functions of this software program are described, and details are given of Westinghouse efforts to support the utilities in reducing outage times through the development of predefined outage plans for critical-path maintenance activities. Also described is the experience gained in the training of specialized maintenance personnel, employing competency-based training techniques and equipment mock-ups, and the benefits experienced in terms of improved quality and productivity of the maintenance performed. The success experienced with these methods has led Westinghouse to expand the use of these training techniques to the more routine skill areas of power plant maintenance. A significant reduction in the operating costs of nuclear power plants will only be brought about by a significant improvement in the quality of maintenance. Westinghouse intends to effect this change by expanding its international service capabilities and by making major investments to promote technological developments in the area of power plant maintenance. (author)

  6. Influences of selective adaptation on perception of audiovisual speech

    Science.gov (United States)

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  7. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. Second-language (L2) American Sign Language (ASL) learners performed this task in the fMRI scanner. Results indicated that the L2 ASL learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitantly increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing, and possibly lipreading, during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Audiovisual Review

    Science.gov (United States)

    Physiology Teacher, 1976

    1976-01-01

    Lists and reviews recent audiovisual materials in the areas of medical, dental, nursing and allied health, and veterinary medicine, as well as undergraduate and high school studies. Each is classified as to level, type of instruction, usefulness, and source of availability. Topics include respiration, renal physiology, muscle mechanics, anatomy, evolution,…

  9. Dynamic Pedagogy for Effective Training of Youths in Cell Phone Maintenance

    Science.gov (United States)

    Ogbuanya, T. C.; Jimoh, Bakare

    2015-01-01

    The study determined dynamic pedagogies for effective training of youths in cell phone maintenance. The study was conducted in Enugu State of Nigeria. Four research questions were developed while four null hypotheses formulated were tested at 0.05 level of significance. A survey research design was adopted for the study. The population for the…

  10. Verbal Self-Instructional Training: An Examination of Its Efficacy, Maintenance, and Generalisation.

    Science.gov (United States)

    Rath, Sudhakar

    1998-01-01

    Examines the differential efficacy, maintenance, and generalization effects of verbal self-instructional training on reading-disabled children. Types subjects by subculture (tribal versus nontribal) and cognitive stage (concrete versus formal operation). Finds that verbal self-instruction is effective for nontribals and children of formal…

  11. Human Resources Training Requirement on NPP Operation and Maintenance

    International Nuclear Information System (INIS)

    Nurlaila; Yuliastuti

    2009-01-01

    This paper discusses the human resources requirements for the Nuclear Power Plant (NPP) operation and maintenance (O&M) phase and the training required for O&M personnel. In addition, it briefly discusses the availability of domestic training facilities, along with suggestions for developing the training facilities needed in Indonesia in the near future. The paper assumes that Indonesia will build twin NPP units of 1000 MWe each using the turnkey contract method. The total number of NPP O&M personnel is estimated at about 692, of which 42 would be located at headquarters and the remaining 650 would work at the NPP site. To date, Indonesia has experience in operating and maintaining non-nuclear power plants and several research reactors, namely the Kartini Reactor in Yogyakarta, the Triga Mark II Reactor in Bandung, and the GA Siwabessy Reactor in Serpong. In addition, the experience of other countries in operating and maintaining NPPs would serve as a reference for Indonesia in formulating an appropriate strategy to develop NPP human resources, particularly in the O&M phase. An education and training development program could be carried out through cooperation with vendor candidates. (author)

  12. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  13. Audiovisual quality assessment and prediction for videotelephony

    CERN Document Server

    Belmudez, Benjamin

    2015-01-01

    The work presented in this book focuses on modeling audiovisual quality as perceived by the users of IP-based solutions for video communication like videotelephony. It also extends the current framework for the parametric prediction of audiovisual call quality. The book addresses several aspects related to the quality perception of entire video calls, namely, the quality estimation of the single audio and video modalities in an interactive context, the audiovisual quality integration of these modalities and the temporal pooling of short sample-based quality scores to account for the perceptual quality impact of time-varying degradations.

  14. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker’s articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers...... often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general...... mechanisms underlie audiovisual integration of speech....

  15. The design and use of proficiency based BWR reactor maintenance and refuelling training mockups

    International Nuclear Information System (INIS)

    Ford, G.J.

    1996-01-01

    The purpose of this paper is to describe the ABB experience with the design and use of boiling water reactor training facilities. The training programs were developed and implemented in cooperation with the nuclear utilities. ABB operates two facilities, the ABB ATOM Light Water Reactor Service Center located in Vasteras, Sweden, and the ABB Combustion Engineering Nuclear Operations BWR Training Center located in Chattanooga, Tennessee, USA. The focus of the training centers is reactor maintenance and refueling activities, plus the capability to develop and qualify tools, procedures, and repair techniques

  16. Virtual and augmented reality for training on maintenance; Realidad virutal y aumentada para la formacion en mantenimiento

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, F.

    2001-07-01

    This paper presents two projects aimed at supporting maintenance training with new technologies. Both projects specify, design, develop, and demonstrate prototypes that allow computer-guided maintenance of complex mechanical elements using Virtual Reality (VIRMAN) and Augmented Reality (STARMATE) techniques. The VIRMAN project is dedicated to developing maintenance training courses using Virtual Reality. It is based on the animation of three-dimensional images for component assembly/disassembly or equipment movements. STARMATE relies on Augmented Reality techniques, a growing area of Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a computer-generated virtual scene, augmenting reality with additional information. (Author)

  17. 36 CFR 1237.16 - How do agencies store audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... audiovisual records? 1237.16 Section 1237.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.16 How do agencies store audiovisual records? Agencies must maintain appropriate storage conditions for permanent...

  18. A Catalan code of best practices for the audiovisual sector

    OpenAIRE

    Teodoro, Emma; Casanovas, Pompeu

    2010-01-01

    In spite of a new general law regarding Audiovisual Communication, the regulatory framework of the audiovisual sector in Spain can still be described as vast, dispersed, and obsolete. The first part of this paper provides an overview of the major challenges facing the Spanish audiovisual sector as a result of the convergence of platforms, services, and operators, paying special attention to the Audiovisual Sector in Catalonia. In the second part, we will present an example of self-regulation through...

  19. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  20. Reduced audiovisual recalibration in the elderly.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
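    The adaptation effect described in this record (the shift in the mean of an individually fitted psychometric function after adapting to asynchrony) can be sketched as follows. This is a minimal illustration with hypothetical, synthetically generated data and a simple grid-search least-squares fit of a Gaussian-shaped synchrony-response curve; the study's actual fitting procedure and parameters are not specified in the abstract.

    ```python
    import math

    def gauss(x, mu, sigma, amp):
        """Gaussian-shaped synchrony-response function: proportion of
        'synchronous' judgments as a function of audiovisual lag (ms)."""
        return amp * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

    def fit_mean(soas, p_sync):
        """Grid-search least-squares fit of (mu, sigma); returns the fitted mean (ms)."""
        amp = max(p_sync)                       # crude amplitude estimate
        best_err, best_mu = float("inf"), 0.0
        for mu in range(-150, 301):             # candidate means, 1 ms steps
            for sigma in range(40, 401, 10):    # candidate widths, ms
                err = sum((gauss(x, mu, sigma, amp) - p) ** 2
                          for x, p in zip(soas, p_sync))
                if err < best_err:
                    best_err, best_mu = err, float(mu)
        return best_mu

    # Hypothetical observer data, generated here purely for illustration.
    soas = list(range(-300, 401, 50))           # sound-lead (<0) to sound-lag (>0)
    pre  = [gauss(x, 10.0, 120.0, 0.9) for x in soas]   # before adaptation
    post = [gauss(x, 60.0, 120.0, 0.9) for x in soas]   # after lag adaptation
    adaptation_effect = fit_mean(soas, post) - fit_mean(soas, pre)  # shift in ms
    ```

    With these synthetic curves the fitted means recover the generating values, so `adaptation_effect` lands near the 50 ms shift built into the data.
    
    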

  1. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC. The response to audiovisual stimuli was significantly faster than the response to unimodal stimuli in both NC and PD. However, audiovisual integration was absent in PD, whereas it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
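    The race-model analysis referred to in this record can be illustrated with Miller's race-model inequality, which asks whether the cumulative distribution of audiovisual response times exceeds the bound given by the sum of the two unimodal distributions; violations indicate genuine multisensory integration rather than statistical facilitation. The sketch below uses hypothetical reaction times, not the study's data or pipeline.

    ```python
    def ecdf(rts, t):
        """Empirical cumulative distribution function: P(RT <= t)."""
        return sum(rt <= t for rt in rts) / len(rts)

    def race_model_violations(av, a, v, times):
        """Return time points where the audiovisual CDF exceeds Miller's bound
        min(1, P(A <= t) + P(V <= t)); any such point indicates integration
        beyond what a race between independent unimodal channels predicts."""
        violations = []
        for t in times:
            bound = min(1.0, ecdf(a, t) + ecdf(v, t))
            if ecdf(av, t) > bound:
                violations.append(t)
        return violations

    # Hypothetical reaction times (ms) for one participant.
    auditory    = [300, 320, 340, 360, 380]
    visual      = [310, 330, 350, 370, 390]
    audiovisual = [240, 260, 280, 300, 320]   # faster than either modality alone
    viol = race_model_violations(audiovisual, auditory, visual, range(200, 401, 20))
    # viol → [240, 260, 280, 300, 320]: the fast audiovisual responses violate the bound.
    ```

    In the abstract's terms, NC participants would show violations like these while PD participants would not, despite both groups responding faster to bimodal stimuli.
    
    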

  2. Electrophysiological assessment of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Dau, Torsten

    Speech perception integrates signals from ear and eye. This is witnessed by a wide range of audiovisual integration effects, such as ventriloquism and the McGurk illusion. Some behavioral evidence suggests that audiovisual integration of specific aspects is special for speech perception. However, our...... knowledge of such bimodal integration would be strengthened if the phenomena could be investigated by objective, neurally based methods. One key question of the present work is whether perceptual processing of audiovisual speech can be gauged with a specific signature of neurophysiological activity...... on the auditory speech percept? In two experiments, which both combine behavioral and neurophysiological measures, an uncovering of the relation between perception of faces and of audiovisual integration is attempted. Behavioral findings suggest a strong effect of face perception, whereas the MMN results are less...

  3. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  4. Resist diabetes: A randomized clinical trial for resistance training maintenance in adults with prediabetes.

    Science.gov (United States)

    Davy, Brenda M; Winett, Richard A; Savla, Jyoti; Marinik, Elaina L; Baugh, Mary Elizabeth; Flack, Kyle D; Halliday, Tanya M; Kelleher, Sarah A; Winett, Sheila G; Williams, David M; Boshra, Soheir

    2017-01-01

    To determine whether a social cognitive theory (SCT)-based intervention improves resistance training (RT) maintenance and strength, and reduces prediabetes prevalence. Sedentary, overweight/obese (BMI: 25-39.9 kg/m2) adults aged 50-69 (N = 170) with prediabetes participated in the 15-month trial. Participants completed a supervised 3-month RT (2×/wk) phase and were randomly assigned (N = 159) to one of two 6-month maintenance conditions: SCT or standard care. Participants continued RT at a self-selected facility. The final 6-month period involved no contact. Assessments occurred at baseline and months 3, 9, and 15. The SCT faded-contact intervention consisted of nine tailored transition (i.e., supervised training to training alone) and nine follow-up sessions. Standard care involved six generic follow-up sessions. Primary outcomes were prevalence of normoglycemia and muscular strength. The retention rate was 76%. Four serious adverse events were reported. After 3 months of RT, 34% of participants were no longer prediabetic. This prevalence of normoglycemia was maintained through month 15 (30%), with no group difference. There was an 18% increase in the odds of being normoglycemic for each % increase in fat-free mass. Increases in muscular strength were evident at month 3 and maintained through month 15 (P < 0.001), which represented improvements of 21% and 14% for chest and leg press, respectively. Results did not demonstrate a greater reduction in prediabetes prevalence in the SCT condition. Resistance training is an effective, maintainable strategy for reducing prediabetes prevalence and increasing muscular strength. Future research which promotes RT initiation and maintenance in clinical and community settings is warranted. ClinicalTrials.gov NCT01112709.

  5. 14 CFR 141.55 - Training course: Contents.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Training course: Contents. 141.55 Section... Training course: Contents. (a) Each training course for which approval is requested must meet the minimum... trained in the room at one time; (2) A description of each type of audiovisual aid, projector, tape...

  6. Audiovisual signs and information science: an evaluation

    Directory of Open Access Journals (Sweden)

    Jalver Bethônico

    2006-12-01

    Full Text Available This work evaluates the relationship of Information Science with audiovisual signs, pointing out conceptual limitations, difficulties imposed by the verbal fundament of knowledge, the reduced use within libraries, and the paths toward a more consistent analysis of audiovisual media, supported by the semiotics of Charles Peirce.

  7. Subjective Evaluation of Audiovisual Signals

    Directory of Open Access Journals (Sweden)

    F. Fikejz

    2010-01-01

    Full Text Available This paper deals with subjective evaluation of audiovisual signals, with emphasis on the interaction between acoustic and visual quality. The subjective test is realized by a simple rating method. The audiovisual signal used in this test is a combination of images compressed by the JPEG compression codec and sound samples compressed by MPEG-1 Layer III. Images and sounds have various contents. This simulates a real situation in which the subject listens to compressed music and watches compressed pictures without access to the original, i.e., uncompressed, signals.

  8. [Accommodation effects of the audiovisual stimulation in the patients experiencing eyestrain with the concomitant disturbances of psychological adaptation].

    Science.gov (United States)

    Shakula, A V; Emel'ianov, G A

    2014-01-01

    The present study was designed to evaluate the effectiveness of audiovisual stimulation on the state of the eye accommodation system in patients experiencing eyestrain with concomitant disturbances of psychological adaptation. It was shown that a course of audiovisual stimulation (viewing a psychorelaxing film accompanied by appropriate music) results in positive (5.9-21.9%) dynamics of the objective accommodation parameters and of the subjective status (4.5-33.2%). Taken together, these findings allow this method to be regarded as a "relaxing preparation" within the integrated complex of measures for the preservation of professional vision in this group of patients.

  9. The Fungible Audio-Visual Mapping and its Experience

    Directory of Open Access Journals (Sweden)

    Adriana Sa

    2014-12-01

    Full Text Available This article draws a perceptual approach to audio-visual mapping. Clearly perceivable cause and effect relationships can be problematic if one desires the audience to experience the music. Indeed perception would bias those sonic qualities that fit previous concepts of causation, subordinating other sonic qualities, which may form the relations between the sounds themselves. The question is, how can an audio-visual mapping produce a sense of causation, and simultaneously confound the actual cause-effect relationships. We call this a fungible audio-visual mapping. Our aim here is to glean its constitution and aspect. We will report a study, which draws upon methods from experimental psychology to inform audio-visual instrument design and composition. The participants are shown several audio-visual mapping prototypes, after which we pose quantitative and qualitative questions regarding their sense of causation, and their sense of understanding the cause-effect relationships. The study shows that a fungible mapping requires both synchronized and seemingly non-related components – sufficient complexity to be confusing. As the specific cause-effect concepts remain inconclusive, the sense of causation embraces the whole. 

  10. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is the case for which the network is already set-up, the aud...

  11. Training methods, tools and aids

    International Nuclear Information System (INIS)

    Martin, H.D.

    1980-01-01

    The training programme, training methods, tools and aids necessary for staffing nuclear power plants depend very much on the overall contractual provisions. The basis for training programmes and methods is the definition of the plant organization and the prequalification of the personnel. Preselection tests are tailored to the different educational levels and precede the training programme, where emphasis is put on practical on-the-job training. Technical basic and introductory courses follow language training and give a broad but basic spectrum of power plant technology. Plant-related theoretical training consists of reactor technology training combined with practical work in laboratories, on a test reactor and of the nuclear power plant course on design philosophy and operation. Classroom instruction together with video tapes and other audiovisual material which are used during this phase are described; as well as the various special courses for the different specialists. The first step of on-the-job training is a practical observation phase in an operating nuclear power plant, where the participants are assigned to shift work or to the different special departments, depending on their future assignment. Training in manufacturers' workshops, in laboratories or in engineering departments necessitate other training methods. The simulator training for operating personnel, for key personnel and, to some extent, also for maintenance personnel and specialists gives the practical feeling for nuclear power plant behaviour during normal and abnormal conditions. During the commissioning phase of the own nuclear power plant, which is the most important practical training, the participants are integrated into the commissioning staff and are assisted during their process of practical learning on-the-job by special instructors. 
Personnel training also includes the training of instructors and assistance in building up special training programmes and materials as well

  12. Audiovisual interpretative skills: between textual culture and formalized literacy

    Directory of Open Access Journals (Sweden)

    Estefanía Jiménez, Ph. D.

    2010-01-01

    Full Text Available This paper presents the results of a study on the process of acquiring interpretative skills to decode audiovisual texts among adolescents and youth. Based on the conception of such competence as the ability to understand the meanings connoted beneath the literal discourses of audiovisual texts, this study compared two variables: on the one hand, the acquisition of such skills through personal and social experience in the consumption of audiovisual products (which is affected by age differences), and, on the other hand, the differences marked by the existence of formalized processes of media literacy. Based on focus groups of young students, the research assesses the existing academic debate about these processes of acquiring skills to interpret audiovisual materials.

  13. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  14. Quality models for audiovisual streaming

    Science.gov (United States)

    Thang, Truong Cong; Kim, Young Suk; Kim, Cheon Seog; Ro, Yong Man

    2006-01-01

    Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, even converted to another modality. In this case, we should consider quality from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model semantic quality, we apply the concept of a "conceptual graph", which consists of semantic nodes and relations between the nodes. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content in which both the video and audio channels may be strongly degraded, and audio may even be converted to text. In the experiments, we also consider the perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.

  15. Audiovisual Archive Exploitation in the Networked Information Society

    NARCIS (Netherlands)

    Ordelman, Roeland J.F.

    2011-01-01

    Safeguarding the massive body of audiovisual content, including rich music collections, in audiovisual archives and enabling access for various types of user groups is a prerequisite for unlocking the social-economic value of these collections. Data quantities and the need for specific content

  16. Resist diabetes: A randomized clinical trial for resistance training maintenance in adults with prediabetes.

    Directory of Open Access Journals (Sweden)

    Brenda M Davy

    Full Text Available To determine whether a social cognitive theory (SCT)-based intervention improves resistance training (RT) maintenance and strength, and reduces prediabetes prevalence. Sedentary, overweight/obese (BMI: 25-39.9 kg/m2) adults aged 50-69 (N = 170) with prediabetes participated in the 15-month trial. Participants completed a supervised 3-month RT (2×/wk) phase and were randomly assigned (N = 159) to one of two 6-month maintenance conditions: SCT or standard care. Participants continued RT at a self-selected facility. The final 6-month period involved no contact. Assessments occurred at baseline and months 3, 9, and 15. The SCT faded-contact intervention consisted of nine tailored transition (i.e., supervised training to training alone) and nine follow-up sessions. Standard care involved six generic follow-up sessions. Primary outcomes were prevalence of normoglycemia and muscular strength. The retention rate was 76%. Four serious adverse events were reported. After 3 months of RT, 34% of participants were no longer prediabetic. This prevalence of normoglycemia was maintained through month 15 (30%), with no group difference. There was an 18% increase in the odds of being normoglycemic for each % increase in fat-free mass. Increases in muscular strength were evident at month 3 and maintained through month 15 (P < 0.001), which represented improvements of 21% and 14% for chest and leg press, respectively. Results did not demonstrate a greater reduction in prediabetes prevalence in the SCT condition. Resistance training is an effective, maintainable strategy for reducing prediabetes prevalence and increasing muscular strength. Future research which promotes RT initiation and maintenance in clinical and community settings is warranted. ClinicalTrials.gov NCT01112709.

  17. Status and problem for Nuclear Power Plant Maintenance Training Center

    International Nuclear Information System (INIS)

    Nanjoh, Takuo

    1991-01-01

    The Nuclear Power Plant Maintenance Training Center of Kansai Electric Power Co., Inc. was founded in October 1983, and seven years have elapsed since then. Education and training were provided to 37,000 persons to meet the situation in the plants and to enhance the facilities. Although the main policy of practical training for preventing the recurrence of troubles has not changed, the situation has evolved since the foundation, and the Center's role has expanded to include PA activities. The see-through plant model installed for technical education in April 1989 is an approximately 1/25-scale model of an actual two-loop machine; it actually generates steam and a small amount of electric power, and it is useful for promoting understanding of the theory of nuclear power generation. It plays an important role in helping visitors to the Center (7,500 persons in fiscal year 1989) understand the mechanism of nuclear power generation. In 1990, the education curriculum, methods, and hours were reviewed with the aim of improving education. The execution of education and training, the training of practical techniques, the reflection of trouble examples in education, and the expansion of facilities are reported. (K.I.)

  18. The role of emotion in dynamic audiovisual integration of faces and voices.

    Science.gov (United States)

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration.

  19. Audiovisual Integration in High Functioning Adults with Autism

    Science.gov (United States)

    Keane, Brian P.; Rosenthal, Orna; Chun, Nicole H.; Shams, Ladan

    2010-01-01

    Autism involves various perceptual benefits and deficits, but it is unclear if the disorder also involves anomalous audiovisual integration. To address this issue, we compared the performance of high-functioning adults with autism and matched controls on experiments investigating the audiovisual integration of speech, spatiotemporal relations, and…

  20. Decision-level fusion for audio-visual laughter detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, M.; Truong, K.; Poppe, R.; Pantic, M.

    2008-01-01

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  1. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech...... signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers...... informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  2. Use of Audiovisual Texts in University Education Process

    Science.gov (United States)

    Aleksandrov, Evgeniy P.

    2014-01-01

    Audio-visual learning technologies offer great opportunities for the development of students' analytical and projective abilities. These technologies can be used in classroom activities and for homework. This article discusses the use of audiovisual media texts in a series of social sciences and humanities courses in the university curriculum.

  3. Perceived synchrony for realistic and dynamic audiovisual events.

    Science.gov (United States)

    Eg, Ragnhild; Behne, Dawn M

    2015-01-01

    In well-controlled laboratory experiments, researchers have found that humans can perceive delays between auditory and visual signals as short as 20 ms. Conversely, other experiments have shown that humans can tolerate audiovisual asynchrony that exceeds 200 ms. This seeming contradiction in human temporal sensitivity can be attributed to a number of factors such as experimental approaches and precedence of the asynchronous signals, along with the nature, duration, location, complexity and repetitiveness of the audiovisual stimuli, and even individual differences. In order to better understand how temporal integration of audiovisual events occurs in the real world, we need to close the gap between the experimental setting and the complex setting of everyday life. With this work, we aimed to contribute one brick to the bridge that will close this gap. We compared perceived synchrony for long-running and eventful audiovisual sequences to shorter sequences that contain a single audiovisual event, for three types of content: action, music, and speech. The resulting windows of temporal integration showed that participants were better at detecting asynchrony for the longer stimuli, possibly because the long-running sequences contain multiple corresponding events that offer audiovisual timing cues. Moreover, the points of subjective simultaneity differ between content types, suggesting that the nature of a visual scene could influence the temporal perception of events. An expected outcome from this type of experiment was the rich variation among participants' distributions and the derived points of subjective simultaneity. Hence, the designs of similar experiments call for more participants than traditional psychophysical studies. Heeding this caution, we conclude that existing theories on multisensory perception are ready to be tested on more natural and representative stimuli.
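    Points of subjective simultaneity and windows of temporal integration such as those reported above are typically derived by fitting a bell-shaped curve to the proportion of "synchronous" judgments across audiovisual offsets. A minimal sketch on hypothetical data; the Gaussian form, SOAs and proportions are illustrative assumptions, not values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stimulus onset asynchronies in ms (negative = audio leads); hypothetical.
soa = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
# Proportion of "synchronous" responses at each SOA; hypothetical.
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.45, 0.15])

def gaussian(x, amp, mu, sigma):
    """Bell-shaped synchrony curve: mu ~ PSS, sigma ~ integration window."""
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, pss, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 100.0])
sigma = abs(sigma)  # sign is arbitrary since sigma enters squared
print(f"PSS = {pss:.1f} ms, window (1 SD) = {sigma:.1f} ms")
```

The fitted peak location estimates the PSS, and the spread gives a summary of the temporal integration window; per-participant fits would expose the individual variation the authors emphasise.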

  4. Age-related audiovisual interactions in the superior colliculus of the rat.

    Science.gov (United States)

    Costa, M; Piché, M; Lepore, F; Guillemot, J-P

    2016-04-21

    It is well established that multisensory integration is a functional characteristic of the superior colliculus that disambiguates external stimuli and therefore reduces reaction times toward simple audiovisual targets in space. However, in conditions where a complex audiovisual stimulus is used, such as optical flow in the presence of modulated audio signals, little is known about the processing of multisensory integration in the superior colliculus. Furthermore, since visual and auditory deficits constitute hallmark signs of aging, we sought to gain some insight into whether audiovisual processes in the superior colliculus are altered with age. Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10-12 months) and aged (21-22 months) rats. Looming circular concentric sinusoidal (CCS) gratings were presented alone and in the presence of sinusoidally amplitude-modulated white noise. In both groups of rats, two different audiovisual response interactions were encountered in the spatial domain: superadditive and suppressive. In contrast, additive audiovisual interactions were found only in adult rats. Hence, superior colliculus audiovisual interactions were more numerous in adult rats (38%) than in aged rats (8%). These results suggest that intersensory interactions in the superior colliculus play an essential role in space processing toward audiovisual moving objects during self-motion. Moreover, aging has a deleterious effect on complex audiovisual interactions.
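    The superadditive/suppressive/additive labels used in such recordings are conventionally assigned by comparing the bimodal response against the sum of the unimodal responses and against the best unimodal response. A hedged sketch of that bookkeeping; the criterion value and spike counts are illustrative assumptions:

```python
# Classify a neuron's audiovisual interaction from mean evoked spike counts.
# The 10% criterion and all example responses are illustrative assumptions.
def classify_interaction(resp_a, resp_v, resp_av, criterion=0.10):
    """Compare the bimodal response to the unimodal sum and best unimodal."""
    additive_sum = resp_a + resp_v
    best_unimodal = max(resp_a, resp_v)
    if resp_av > additive_sum * (1 + criterion):
        return "superadditive"   # bimodal exceeds the unimodal sum
    if resp_av < best_unimodal * (1 - criterion):
        return "suppressive"     # bimodal falls below the best unimodal
    return "additive"            # bimodal roughly matches the unimodal sum

print(classify_interaction(12.0, 8.0, 30.0))  # superadditive
print(classify_interaction(12.0, 8.0, 9.0))   # suppressive
print(classify_interaction(12.0, 8.0, 20.0))  # additive
```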

  5. [Effects of real-time audiovisual feedback on secondary-school students' performance of chest compressions].

    Science.gov (United States)

    Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto

    2015-06-01

    To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years achieve quality chest compressions on a mannequin.
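    Feedback systems like the one described score each compression against depth and rate targets. A minimal sketch of that scoring, mirroring the "at least 80% of targeted depth" criterion mentioned above; the target values and all sample data are hypothetical:

```python
# Score a series of chest compressions against depth and rate targets.
# Target values and all sample data below are illustrative assumptions.
compression_depths_cm = [4.8, 5.2, 5.6, 4.2, 5.0, 5.4, 3.9, 5.1]
compression_times_s = [0.0, 0.55, 1.10, 1.62, 2.15, 2.70, 3.22, 3.75]

TARGET_DEPTH_CM = 5.0       # assumed target compression depth
MIN_DEPTH_FRACTION = 0.8    # "at least 80% of targeted depth" criterion

deep_enough = sum(d >= MIN_DEPTH_FRACTION * TARGET_DEPTH_CM
                  for d in compression_depths_cm)
depth_accuracy = 100.0 * deep_enough / len(compression_depths_cm)

# Mean compression rate from the intervals between successive compressions.
intervals = [b - a for a, b in zip(compression_times_s, compression_times_s[1:])]
rate_cpm = 60.0 / (sum(intervals) / len(intervals))

print(f"depth accuracy: {depth_accuracy:.1f}%, rate: {rate_cpm:.0f} cpm")
```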

  6. Knowledge Generated by Audiovisual Narrative Action Research Loops

    Science.gov (United States)

    Bautista Garcia-Vera, Antonio

    2012-01-01

    We present data collected from the research project funded by the Ministry of Education and Science of Spain entitled "Audiovisual Narratives and Intercultural Relations in Education." One of the aims of the research was to determine the nature of thought processes occurring during audiovisual narratives. We studied the possibility of…

  7. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors of red and purple (the latter color known to minimally activate the extra-genicular pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  8. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    Full Text Available BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  9. Gestión documental de la información audiovisual deportiva en las televisiones generalistas / Documentary management of sport audiovisual information in generalist television networks

    Directory of Open Access Journals (Sweden)

    Jorge Caldera Serrano

    2005-01-01

    Full Text Available The management of sport audiovisual information is analyzed within the framework of the documentary information systems of national, regional and local television networks. To this end, the documentary chain through which sport audiovisual information passes is traced in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Sport audiovisual documentation does not differ greatly in its analysis from other televised documentary types, so its management and dissemination are examined in greater depth, showing the informational flow within the system.

  10. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Directory of Open Access Journals (Sweden)

    David Alais

    2010-06-01

    Full Text Available An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order

  11. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    Science.gov (United States)

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be
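    The temporal order discrimination thresholds tracked in this training study are typically estimated by fitting a psychometric function to the proportion of "visual first" responses across SOAs. A minimal sketch on hypothetical data; the logistic form, SOAs and response proportions are illustrative assumptions, not values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TOJ data: SOA in ms (positive = visual leads) and the
# proportion of "visual first" responses at each SOA.
soa = np.array([-240, -120, -60, 0, 60, 120, 240], dtype=float)
p_vfirst = np.array([0.05, 0.20, 0.35, 0.50, 0.70, 0.85, 0.95])

def logistic(x, pss, slope):
    """Cumulative logistic: pss = 50% point, slope = spread parameter."""
    return 1.0 / (1.0 + np.exp(-(x - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soa, p_vfirst, p0=[0.0, 50.0])

# JND: half the SOA spread between the 25% and 75% response points,
# which for this parameterisation equals slope * ln(3).
jnd = slope * np.log(3.0)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A drop in the fitted JND across the 10 training sessions would correspond to the "reduced temporal order discrimination thresholds" the abstract reports.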

  12. Challenges and opportunities for audiovisual diversity in the Internet

    Directory of Open Access Journals (Sweden)

    Trinidad García Leiva

    2017-06-01

    Full Text Available http://dx.doi.org/10.5007/2175-7984.2017v16n35p132 At the gates of the first quarter of the 21st century, nobody doubts that the value chain of the audiovisual industry has undergone important transformations. The digital era presents opportunities for cultural enrichment as well as new challenges. After presenting a general portrait of the audiovisual industries in the digital era, taking the Spanish case as a point of departure and paying attention to players and logics in tension, this paper presents some notes on the advantages and disadvantages that exist for the diversity of audiovisual production, distribution and consumption online. It is argued here that the diversity of the online audiovisual sector is not guaranteed, because the formula that has made some players successful and powerful is based on walled-garden models to monetize content (which, moreover, add restrictions to its reproduction and circulation by and among consumers). The final objective is to present some ideas about the elements that prevent the strengthening of the diversity of the audiovisual industry in the digital scenario. The barriers to overcome are classified as technological, financial, social, legal and political.

  13. An Instrumented Glove for Control Audiovisual Elements in Performing Arts

    Directory of Open Access Journals (Sweden)

    Rafael Tavares

    2018-02-01

    Full Text Available The use of cutting-edge technologies such as wearable devices to control reactive audiovisual systems is rarely applied in more conventional stage performances, such as opera. This work reports a cross-disciplinary approach to the research and development of the WMTSensorGlove, a data glove used in an opera performance to control audiovisual elements on stage through gestural movements. A system architecture for the interaction between the wireless wearable device and the different audiovisual systems is presented, taking advantage of the Open Sound Control (OSC) protocol. The developed wearable system was used as an audiovisual controller in "As sete mulheres de Jeremias Epicentro", a Portuguese opera by Quarteto Contratempus, which premiered in September 2017.

  14. Quality assurance/quality control training for plant operation and maintenance

    International Nuclear Information System (INIS)

    Bergbauer, A.K.

    1986-01-01

    One of the most important tasks during plant operation is to ensure the effectiveness of the information links within the utility and the nuclear industry. To make use of all information, experience and knowledge, as well as to make sure that instructions are followed, it is necessary to provide rules, instructions and training for all people involved. QA/QC training for plant operation and maintenance must instill an awareness in personnel such that, for example, instructions and procedures are followed strictly, the management is informed about deviations and mistakes, alterations are carried out only with approval, safety systems are kept intact at all times, and interfaces are linked together properly. QA measures are demonstrated by means of examples concerning staff organization, control room and shift rules, work permit procedures, and the use of an information feedback system. (orig.)

  15. Audiovisual integration in speech perception: a multi-stage process

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    investigate whether the integration of auditory and visual speech observed in these two audiovisual integration effects are specific traits of speech perception. We further ask whether audiovisual integration is undertaken in a single processing stage or multiple processing stages....

  16. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Science.gov (United States)

    Gerson, Sarah A; Schiavio, Andrea; Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  17. CPR performance in the presence of audiovisual feedback or football shoulder pads.

    Science.gov (United States)

    Tanaka, Shota; Rodrigues, Wayne; Sotir, Susan; Sagisaka, Ryo; Tanaka, Hideharu

    2017-01-01

    The initiation of cardiopulmonary resuscitation (CPR) can be complicated by the use of protective equipment in contact sports, and the rate of success in resuscitating the patient depends on the time from incident to start of CPR. The aim of our study was to see if (1) previous training, (2) the presence of audiovisual feedback and (3) the presence of football shoulder pads (FSP) affected the quality of chest compressions. Six basic life support certified athletic training students (BLS-ATS), six basic life support certified emergency medical service personnel (BLS-EMS) and six advanced cardiac life support certified emergency medical service personnel (ACLS-EMS) participated in a crossover manikin study. A quasi-experimental repeated measures design was used to measure the chest compression depth (cm), rate (cpm), depth accuracy (%) and rate accuracy (%) on four different conditions by using feedback and/or FSP. Real CPR Help manufactured by ZOLL (Chelmsford, Massachusetts, USA) was used for the audiovisual feedback. Three participants from each group performed 2 min of chest compressions at baseline first, followed by compressions with FSP, with feedback and with both FSP and feedback (FSP+feedback). The other three participants from each group performed compressions at baseline first, followed by compressions with FSP+feedback, feedback and FSP. CPR performance did not differ between the groups at baseline (median (IQR), BLS-ATS: 5.0 (4.4-6.1) cm, 114(96-131) cpm; BLS-EMS: 5.4 (4.1-6.4) cm, 112(99-131) cpm; ACLS-EMS: 6.4 (5.7-6.7) cm, 138(113-140) cpm; depth p=0.10, rate p=0.37). A statistically significant difference in the percentage of depth accuracy was found with feedback (median (IQR), 13.8 (0.9-49.2)% vs 69.6 (32.3-85.8)%; p=0.0002). The rate accuracy was changed from 17.1 (0-80.7)% without feedback to 59.2 (17.3-74.3)% with feedback (p=0.50). The use of feedback was effective for depth accuracy, especially in the BLS-ATS group, regardless of the

  18. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05), whereas audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audiovisual stimuli in patients with migraine. PMID:24961903
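    The cumulative-distribution-function analysis mentioned in the Methods is commonly implemented as a race-model test: the bimodal response-time distribution is compared against the bound formed by summing the unimodal distributions (Miller's inequality), with positive violations taken as evidence of integration. A sketch on simulated data; the distribution parameters are invented for illustration and are not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical response times in ms for unimodal and bimodal targets.
rt_a = rng.normal(420, 60, 500)    # auditory-only
rt_v = rng.normal(440, 60, 500)    # visual-only
rt_av = rng.normal(370, 55, 500)   # audiovisual

t = np.arange(200, 700, 10)        # time points at which to evaluate CDFs

def cdf(rt):
    """Empirical cumulative distribution of RTs at each time point."""
    return np.array([(rt <= x).mean() for x in t])

# Miller's race-model bound: P(A <= t) + P(V <= t), capped at 1.
race_bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)

# Positive values indicate the bimodal CDF exceeds the race-model bound,
# i.e., responses faster than any race between separate channels allows.
violation = cdf(rt_av) - race_bound
print(f"max race-model violation: {violation.max():.3f}")
```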

  19. Audio-Visual and Autogenic Relaxation Alter Amplitude of Alpha EEG Band, Causing Improvements in Mental Work Performance in Athletes.

    Science.gov (United States)

    Mikicin, Mirosław; Kowalczyk, Marek

    2015-09-01

    The aim of the present study was to investigate the effect of regular audio-visual relaxation combined with Schultz's autogenic training on: (1) the results of behavioral tests that evaluate work performance during burdensome cognitive tasks (Kraepelin test), and (2) changes in the classical EEG alpha frequency band across the neocortex (frontal, temporal, occipital, parietal) and hemispheres (left, right) versus condition (relaxation only, 7-12 Hz). Both the experimental group (EG) and the age- and skill-matched control group (CG) consisted of eighteen athletes (ten males and eight females). After 7 months of training, the EG demonstrated changes in the amplitude of mean EEG alpha-band electrical activity at rest and an improvement in almost all components of the Kraepelin test. The same variables in the CG were unchanged following the period without the intervention. Summing up, combining audio-visual relaxation with autogenic training significantly improves athletes' ability to perform prolonged mental effort. These changes are accompanied by greater amplitude of alpha-band waves in the relaxed state. The results suggest the usefulness of relaxation techniques during performance of mentally difficult sports tasks (sports based on speed and stamina, sports games, combat sports) and during athletes' rest periods.

  20. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify audiovisual integration characteristics in middle-aged adults, we instructed younger and middle-aged adults to conduct an auditory/visual stimuli discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented in the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration is attenuated in middle-aged adults and further confirmed age-related decline in information processing.

  1. Audiovisual spoken word training can promote or impede auditory-only perceptual learning: prelingually deafened adults with late-acquired cochlear implants versus normal hearing adults.

    Science.gov (United States)

    Bernstein, Lynne E; Eberhardt, Silvio P; Auer, Edward T

    2014-01-01

    Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC non-sense words and non-sense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We

  2. UH-1 Helicopter Mechanic (MOS 67N20) Job Description Survey: Background, Training, and General Maintenance Activities.

    Science.gov (United States)

    Schulz, Russel E.; And Others

    The report, the first of two documents examining the relationship among job requirements, training, and manpower considerations for Army aviation maintenance personnel, discusses the development of task data-gathering techniques and procedures for incorporating these data into training programs for the UH-1 helicopter mechanic specialty (MOS…

  3. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Alonso

    2007-01-01

    The management of sport audiovisual documentation within the Documentary Information Systems of state, zonal, and local television networks is analyzed. To this end, the documentary chain through which sport audiovisual information passes is traced in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, audiovisual sport documentation differs i...

  5. Rapid, generalized adaptation to asynchronous audiovisual speech.

    Science.gov (United States)

    Van der Burg, Erik; Goodbourn, Patrick T

    2015-04-07

    The brain is adaptive. The speed of propagation through air, and of low-level sensory processing, differs markedly between auditory and visual stimuli; yet the brain can adapt to compensate for the resulting cross-modal delays. Studies investigating temporal recalibration to audiovisual speech have used prolonged adaptation procedures, suggesting that adaptation is sluggish. Here, we show that adaptation to asynchronous audiovisual speech occurs rapidly. Participants viewed a brief clip of an actor pronouncing a single syllable. The voice was either advanced or delayed relative to the corresponding lip movements, and participants were asked to make a synchrony judgement. Although we did not use an explicit adaptation procedure, we demonstrate rapid recalibration based on a single audiovisual event. We find that the point of subjective simultaneity on each trial is highly contingent upon the modality order of the preceding trial. We find compelling evidence that rapid recalibration generalizes across different stimuli, and different actors. Finally, we demonstrate that rapid recalibration occurs even when auditory and visual events clearly belong to different actors. These results suggest that rapid temporal recalibration to audiovisual speech is primarily mediated by basic temporal factors, rather than higher-order factors such as perceived simultaneity and source identity. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  6. Narrativa audiovisual. Estrategias y recursos [Reseña

    OpenAIRE

    Cuenca Jaramillo, María Dolores

    2011-01-01

    Review of the book "Narrativa audiovisual. Estrategias y recursos" by Fernando Canet and Josep Prósper. Cuenca Jaramillo, MD. (2011). Narrativa audiovisual. Estrategias y recursos [Reseña]. Vivat Academia. Revista de Comunicación. Año XIV(117):125-130. http://hdl.handle.net/10251/46210

  7. Two Innovative Steps for Training on Maintenance: 'VIRMAN' Spanish Project based on Virtual Reality 'STARMATE' European Project based on Augmented Reality

    International Nuclear Information System (INIS)

    Gonzalez Anez, Francisco

    2002-01-01

    This paper presents two development projects (STARMATE and VIRMAN) focused on supporting training on maintenance. Both projects aim at specifying, designing, developing, and demonstrating prototypes allowing computer-guided maintenance of complex mechanical elements using Augmented and Virtual Reality techniques. VIRMAN is a Spanish development project. Its objective is to create a computer tool for elaborating maintenance training courses and delivering training based on 3D virtual-reality models of complex components. The training delivery includes 3D recorded displays of maintenance procedures with all the complementary information needed to understand the intervention. Users are requested to perform the maintenance intervention while trying to follow the procedure, and they can be evaluated on the level of knowledge achieved. Instructors can check the evaluation records left during the training sessions. VIRMAN is simple software that runs on a regular computer and can be used in an Internet framework. STARMATE is a step forward in the area of virtual reality. STARMATE is a European Commission project in the frame of the 'Information Societies Technologies' programme. A consortium of five companies and one research institute shares their expertise in this new technology. STARMATE provides two main functionalities: (1) user assistance for achieving assembly/disassembly and following maintenance procedures, and (2) workforce training. The project relies on Augmented Reality techniques, a growing area in Virtual Reality research. The idea of Augmented Reality is to combine a real scene, viewed by the user, with a virtual scene, generated by a computer, augmenting the reality with additional information. The user interface consists of see-through goggles, headphones, a microphone, and an optical tracking system, all integrated in a helmet connected to two regular computers. The user has his hands free for performing the maintenance intervention and can navigate in the virtual…

  8. The role of visual spatial attention in audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias; Tiippana, K.; Laarni, J.

    2009-01-01

    Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive. Here we measured the influence from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…

  9. Media Aid Beyond the Factual: Culture, Development, and Audiovisual Assistance

    Directory of Open Access Journals (Sweden)

    Benjamin A. J. Pearson

    2015-01-01

    Full Text Available This paper discusses audiovisual assistance, a form of development aid that focuses on the production and distribution of cultural and entertainment media such as fictional films and TV shows. While the first audiovisual assistance program dates back to UNESCO’s International Fund for the Promotion of Culture in the 1970s, the past two decades have seen a proliferation of audiovisual assistance that, I argue, is related to a growing concern for culture in post-2015 global development agendas. In this paper, I examine the aims and motivations behind the EU’s audiovisual assistance programs to countries in the Global South, using data from policy documents and semi-structured, in-depth interviews with Program Managers and administrative staff in Brussels. These programs prioritize forms of audiovisual content that are locally specific, yet globally tradable. Furthermore, I argue that they have an ambivalent relationship with traditional notions of international development, one that conceptualizes media not only as a means to achieve economic development and human rights aims, but as a form of development itself.

  10. Gestión documental de la información audiovisual deportiva en las televisiones generalistas

    Documentary management of the sport audio-visual information in the generalist televisions

    OpenAIRE

    Jorge Caldera Serrano; Felipe Zapico Alonso

    2005-01-01

    The management of sport audiovisual information within the Documentary Information Systems of state, zonal, and local television networks is analyzed. To this end, the documentary chain through which sport audiovisual information passes is traced in order to analyze each of its parameters, offering a series of recommendations and standards for the preparation of the sport audiovisual record. Evidently, sport audiovisual documentation…

  11. Interactive videodisc in maintenance

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Nguyen Van Nghi, B.

    1986-01-01

    After a recall of videodisc characteristics, this paper presents its utilization by Electricite de France in the framework of training and maintenance. The SICMA system (Interactive Communication System in Maintenance), developed and tested by Electricite de France, is presented along with its utilization. It has been tested at the Dampierre and Paluel sites in cases of training and maintenance (disconnection of the drive rods of control elements); the conclusions of this experimentation are finally given. 4 refs [fr]

  12. Audiovisual synchrony enhances BOLD responses in a brain network including multisensory STS while also enhancing target-detection performance for both modalities

    Science.gov (United States)

    Marchant, Jennifer L; Ruff, Christian C; Driver, Jon

    2012-01-01

    The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431–11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431–11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex. PMID:21953980
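    As an illustrative aside on the sensitivity measure d′ reported in this record (the sketch below is not part of the record): under the standard equal-variance Gaussian signal-detection model, d′ is the difference between the z-transformed hit rate and false-alarm rate. A minimal computation:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, false_alarm_rate, eps=1e-6):
        """Sensitivity index d' = z(H) - z(F) under the equal-variance
        Gaussian signal-detection model. Rates are clipped away from
        0 and 1 so the inverse normal CDF stays finite."""
        z = NormalDist().inv_cdf
        h = min(max(hit_rate, eps), 1 - eps)
        f = min(max(false_alarm_rate, eps), 1 - eps)
        return z(h) - z(f)
    ```

    For example, a hit rate of 0.84 with a false-alarm rate of 0.16 yields d′ ≈ 1.99, while equal hit and false-alarm rates yield d′ = 0 (chance-level sensitivity).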

  13. Trigger videos on the Web: Impact of audiovisual design

    NARCIS (Netherlands)

    Verleur, R.; Heuvelman, A.; Verhagen, Pleunes Willem

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is

  14. Segmentation of the Speaker's Face Region with Audiovisual Correlation

    Science.gov (United States)

    Liu, Yuyu; Sato, Yoichi

    The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows, which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on the probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph-cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
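    The quadratic mutual information mentioned in this record has a closed-form Parzen (Gaussian kernel) estimator in the information-theoretic-learning formulation of Xu and Principe. As a hedged sketch for 1-D feature sequences with a fixed kernel width (the record's actual audiovisual features and adaptive bandwidths are not reproduced here):

    ```python
    import math

    def gauss(d, sigma):
        # 1-D Gaussian kernel evaluated at distance d
        return math.exp(-d * d / (2 * sigma * sigma)) / (math.sqrt(2 * math.pi) * sigma)

    def quadratic_mi(xs, ys, sigma=1.0):
        """Euclidean-distance quadratic mutual information between two 1-D
        feature sequences, via Parzen density estimates. Pairwise kernel
        interactions use width sqrt(2)*sigma (convolution of two kernels).
        Returns V_J + V_M - 2*V_C >= 0, with 0 iff the Parzen joint
        estimate factorizes into the product of the marginal estimates."""
        n = len(xs)
        s = math.sqrt(2) * sigma
        gx = [[gauss(xs[i] - xs[j], s) for j in range(n)] for i in range(n)]
        gy = [[gauss(ys[i] - ys[j], s) for j in range(n)] for i in range(n)]
        v_j = sum(gx[i][j] * gy[i][j] for i in range(n) for j in range(n)) / n**2
        mx = sum(gx[i][j] for i in range(n) for j in range(n)) / n**2
        my = sum(gy[i][j] for i in range(n) for j in range(n)) / n**2
        v_m = mx * my
        v_c = sum(
            (sum(gx[i][j] for j in range(n)) / n) * (sum(gy[i][j] for j in range(n)) / n)
            for i in range(n)
        ) / n
        return v_j + v_m - 2 * v_c
    ```

    Identical sequences give a clearly positive value, while a constant second sequence gives (numerically) zero, since the joint density estimate then factorizes exactly.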

  15. Read My Lips: Brain Dynamics Associated with Audiovisual Integration and Deviance Detection.

    Science.gov (United States)

    Tse, Chun-Yu; Gratton, Gabriele; Garnsey, Susan M; Novak, Michael A; Fabiani, Monica

    2015-09-01

    Information from different modalities is initially processed in different brain areas, yet real-world perception often requires the integration of multisensory signals into a single percept. An example is the McGurk effect, in which people viewing a speaker whose lip movements do not match the utterance perceive the spoken sounds incorrectly, hearing them as more similar to those signaled by the visual rather than the auditory input. This indicates that audiovisual integration is important for generating the phoneme percept. Here we asked when and where the audiovisual integration process occurs, providing spatial and temporal boundaries for the processes generating phoneme perception. Specifically, we wanted to separate audiovisual integration from other processes, such as simple deviance detection. Building on previous work employing ERPs, we used an oddball paradigm in which task-irrelevant audiovisually deviant stimuli were embedded in strings of non-deviant stimuli. We also recorded the event-related optical signal, an imaging method combining spatial and temporal resolution, to investigate the time course and neuroanatomical substrate of audiovisual integration. We found that audiovisual deviants elicit a short duration response in the middle/superior temporal gyrus, whereas audiovisual integration elicits a more extended response involving also inferior frontal and occipital regions. Interactions between audiovisual integration and deviance detection processes were observed in the posterior/superior temporal gyrus. These data suggest that dynamic interactions between inferior frontal cortex and sensory regions play a significant role in multimodal integration.

  16. Audiovisual Script Writing.

    Science.gov (United States)

    Parker, Norton S.

    In audiovisual writing the writer must first learn to think in terms of moving visual presentation. The writer must research his script, organize it, and adapt it to a limited running time. By use of a pleasant-sounding narrator and well-written narration, the visual and narrative can be successfully integrated. There are two types of script…

  17. Effect of attentional load on audiovisual speech perception: Evidence from ERPs

    Directory of Open Access Journals (Sweden)

    Agnès eAlsius

    2014-07-01

    Full Text Available Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  18. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    Science.gov (United States)

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  19. Active Drumming Experience Increases Infants' Sensitivity to Audiovisual Synchrony during Observed Drumming Actions.

    Directory of Open Access Journals (Sweden)

    Sarah A Gerson

    Full Text Available In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition.

  20. Active Drumming Experience Increases Infants’ Sensitivity to Audiovisual Synchrony during Observed Drumming Actions

    Science.gov (United States)

    Timmers, Renee; Hunnius, Sabine

    2015-01-01

    In the current study, we examined the role of active experience on sensitivity to multisensory synchrony in six-month-old infants in a musical context. In the first of two experiments, we trained infants to produce a novel multimodal effect (i.e., a drum beat) and assessed the effects of this training, relative to no training, on their later perception of the synchrony between audio and visual presentation of the drumming action. In a second experiment, we then contrasted this active experience with the observation of drumming in order to test whether observation of the audiovisual effect was as effective for sensitivity to multimodal synchrony as active experience. Our results indicated that active experience provided a unique benefit above and beyond observational experience, providing insights on the embodied roots of (early) music perception and cognition. PMID:26111226

  1. Development of Sensitivity to Audiovisual Temporal Asynchrony during Midchildhood

    Science.gov (United States)

    Kaganovich, Natalya

    2016-01-01

    Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal…

  2. "Audio-visuel Integre" et Communication(s) ("Integrated Audiovisual" and Communication)

    Science.gov (United States)

    Moirand, Sophie

    1974-01-01

    This article examines the usefulness of the audiovisual method in teaching communication competence, and calls for research in audiovisual methods as well as in communication theory for improvement in these areas. (Text is in French.) (AM)

  3. The Influence of Selective and Divided Attention on Audiovisual Integration in Children.

    Science.gov (United States)

    Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong

    2016-01-24

    This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.

  4. Training Aids for Online Instruction: An Analysis.

    Science.gov (United States)

    Guy, Robin Frederick

    This paper describes a number of different types of training aids currently employed in online training: non-interactive audiovisual presentations; interactive computer-based aids; partially interactive aids based on recorded searches; print-based materials; and kits. The advantages and disadvantages of each type of aid are noted, and a table…

  5. Mujeres e industria audiovisual hoy: Involución, experimentación y nuevos modelos narrativos Women and the audiovisual (industry today: regression, experiment and new narrative models

    Directory of Open Access Journals (Sweden)

    Ana MARTÍNEZ-COLLADO MARTÍNEZ

    2011-07-01

    Full Text Available This article analyses audiovisual art practices in the contemporary context. It first describes the regression of audiovisual practices by women artists: women are present neither as producers, nor as filmmakers, nor as executives in the audiovisual industry, so traditional gender stereotypes are inevitably reconstituted and reinforced. The article then turns to feminist audiovisual art practice in the nineteen seventies and eighties. Taking up the camera became absolutely necessary, not only to give voice to many women, but also to reinscribe absent discourses and articulate a critical discourse on cultural representation. It also analyses how, from the nineties onwards, these practices explore new narrative models linked to the transformations of contemporary subjectivity, while developing their audiovisual production in an "expanded field" of exhibition. Finally, the article points to the relationship of feminist audiovisual practices to the complex terrain of globalization and the information society: the narration of local experience has found in the audiovisual a privileged medium for addressing problems of difference, identity, race, and ethnicity.

  6. Practical Applications for Maintenance of Certification Products in Child and Adolescent Residency Training.

    Science.gov (United States)

    Williams, Laurel L; Sexson, Sandra; Dingle, Arden D; Young-Walker, Laine; John, Nadyah; Hunt, Jeffrey

    2016-04-01

    The authors evaluated whether Maintenance of Certification (MOC) Performance-in-Practice products used in training increase trainee knowledge of MOC processes and are viewed by trainees as a useful activity. Six child and adolescent psychiatry fellowships used MOC products in continuity clinics to assess their usefulness as training tools. Two surveys assessed initial knowledge of MOC and usefulness of the activity. Forty-one fellows completed the initial survey. A majority of first-year fellows indicated lack of awareness of MOC, in contrast to a majority of second-year fellows, who indicated some awareness. Thirty-five fellows completed the second survey. A majority of first- and second-year fellows found the activity easy to execute and would change something about their practice as a result. Using MOC products in training appears to be a useful activity that may assist training programs in teaching the principles of self- and peer-learning.

  7. Prácticas de producción audiovisual universitaria reflejadas en los trabajos presentados en la muestra audiovisual universitaria Ventanas 2005-2009

    Directory of Open Access Journals (Sweden)

    Maria Urbanczyk

    2011-01-01

    Full Text Available This article presents the results of research on university audiovisual production in Colombia, based on the works presented at the Ventanas 2005-2009 university audiovisual showcase. The study of the works sought to cover as completely as possible the audiovisual production process carried out by university students, from the birth of the idea to the final product, its circulation, and its socialization. The most recurrent themes were found to be violence and feelings, reflected through different genres, aesthetic treatments, and conceptual approaches. Given the absence of research legitimizing the knowledge produced in the classroom in the audiovisual field in Colombia, this research aims to open a path for demonstrating the contribution young people make to the consolidation of a national narrative and the preservation of the country's memory.

  8. 36 CFR 1237.18 - What are the environmental standards for audiovisual records storage?

    Science.gov (United States)

    2010-07-01

    ... standards for audiovisual records storage? 1237.18 Section 1237.18 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.18 What are the environmental standards for audiovisual records storage? (a...

  9. Neural Correlates of Audiovisual Integration of Semantic Category Information

    Science.gov (United States)

    Hu, Zhonghua; Zhang, Ruiling; Zhang, Qinglin; Liu, Qiang; Li, Hong

    2012-01-01

    Previous studies have found a late frontal-central audiovisual interaction during the time period about 150-220 ms post-stimulus. However, it is unclear which process this audiovisual interaction reflects: the processing of acoustic features or the classification of stimuli. To investigate this question, event-related potentials were recorded…

  10. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  11. Neural Correlates of Temporal Complexity and Synchrony during Audiovisual Correspondence Detection.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Cheung, Allen; McFadyen, Jessica; Ren, Yudan; Guo, Christine C

    2018-01-01

    We often perceive real-life objects as multisensory cues through space and time. A key challenge for audiovisual integration is to match neural signals that not only originate from different sensory modalities but also typically reach the observer at slightly different times. In humans, complex, unpredictable audiovisual streams lead to higher levels of perceptual coherence than predictable, rhythmic streams. In addition, perceptual coherence for complex signals seems less affected by increased asynchrony between visual and auditory modalities than for simple signals. Here, we used functional magnetic resonance imaging to determine the human neural correlates of audiovisual signals with different levels of temporal complexity and synchrony. Our study demonstrated that greater perceptual asynchrony and lower signal complexity impaired performance in an audiovisual coherence-matching task. Differences in asynchrony and complexity were also underpinned by a partially different set of brain regions. In particular, our results suggest that, while regions in the dorsolateral prefrontal cortex (DLPFC) were modulated by differences in memory load due to stimulus asynchrony, areas traditionally thought to be involved in speech production and recognition, such as the inferior frontal and superior temporal cortex, were modulated by the temporal complexity of the audiovisual signals. Our results, therefore, indicate specific processing roles for different subregions of the fronto-temporal cortex during audiovisual coherence detection.

  12. Action-outcome learning and prediction shape the window of simultaneity of audiovisual outcomes.

    Science.gov (United States)

    Desantis, Andrea; Haggard, Patrick

    2016-08-01

    To form a coherent representation of the objects around us, the brain must group the different sensory features composing these objects. Here, we investigated whether actions contribute to this grouping process. In particular, we assessed whether action-outcome learning and prediction contribute to audiovisual temporal binding. Participants were presented with two audiovisual pairs: one pair was triggered by a left action, and the other by a right action. In a later test phase, the audio and visual components of these pairs were presented at different onset times. Participants judged whether they were simultaneous or not. To assess the role of action-outcome prediction on audiovisual simultaneity, each action triggered either the same audiovisual pair as in the learning phase ('predicted' pair), or the pair that had previously been associated with the other action ('unpredicted' pair). We found that the time window within which auditory and visual events appeared simultaneous increased for predicted compared to unpredicted pairs. However, no change in audiovisual simultaneity was observed when audiovisual pairs followed visual cues, rather than voluntary actions. This suggests that only action-outcome learning promotes temporal grouping of audio and visual effects. In a second experiment we observed that changes in audiovisual simultaneity depend not only on our ability to predict what outcomes our actions generate, but also on learning the delay between the action and the multisensory outcome. When participants learned that the delay between action and audiovisual pair was variable, the window of audiovisual simultaneity for predicted pairs increased, relative to a fixed action-outcome pair delay. This suggests that participants learn action-based predictions of audiovisual outcome, and adapt their temporal perception of outcome events based on such predictions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
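The "window of simultaneity" used in studies like this one is commonly summarized as the range of audiovisual onset asynchronies judged simultaneous on at least half of trials. A minimal sketch of that summary statistic, using made-up response proportions rather than the authors' data:

```python
def crossing(x0, y0, x1, y1, level=0.5):
    """Linearly interpolate the x at which y crosses `level` between two points."""
    return x0 + (level - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical simultaneity-judgment data: SOA in ms (negative = audio first)
# and the proportion of trials judged "simultaneous" at each SOA.
soas  = [-300, -200, -100, 0, 100, 200, 300]
p_sim = [0.05, 0.30, 0.80, 0.95, 0.85, 0.40, 0.10]

# The left and right 50% crossings bound the simultaneity window.
left  = crossing(soas[1], p_sim[1], soas[2], p_sim[2])
right = crossing(soas[4], p_sim[4], soas[5], p_sim[5])
print(f"window: {left:.0f} ms to {right:.0f} ms (width {right - left:.0f} ms)")
```

A widening of this window for predicted pairs is what the abstract describes; curve-fitting methods (e.g. Gaussian fits) are also common, but the interpolation above conveys the idea.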

  13. The influence of scenario-based training and real-time audiovisual feedback on out-of-hospital cardiopulmonary resuscitation quality and survival from out-of-hospital cardiac arrest.

    Science.gov (United States)

    Bobrow, Bentley J; Vadeboncoeur, Tyler F; Stolz, Uwe; Silver, Annemarie E; Tobin, John M; Crawford, Scott A; Mason, Terence K; Schirmer, Jerome; Smith, Gary A; Spaite, Daniel W

    2013-07-01

    We assess whether an initiative to optimize out-of-hospital provider cardiopulmonary resuscitation (CPR) quality is associated with improved CPR quality and increased survival from out-of-hospital cardiac arrest. This was a before-after study of consecutive adult out-of-hospital cardiac arrests. Data were obtained from out-of-hospital forms and defibrillators. Phase 1 included 18 months with real-time audiovisual feedback disabled (October 2008 to March 2010). Phase 2 included 16 months (May 2010 to September 2011) after scenario-based training of 373 professional rescuers and with real-time audiovisual feedback enabled. The effect of interventions on survival to hospital discharge was assessed with multivariable logistic regression. Multiple imputation of missing data was used to analyze the effect of interventions on CPR quality. Analysis included 484 out-of-hospital cardiac arrest patients (phase 1: 232; phase 2: 252). Median age was 68 years (interquartile range 56-79); 66.5% were men. CPR quality measures improved significantly from phase 1 to phase 2: mean chest compression rate decreased from 128 to 106 chest compressions per minute (difference -23 chest compressions; 95% confidence interval [CI] -26 to -19 chest compressions); mean chest compression depth increased from 1.78 to 2.15 inches (difference 0.38 inches; 95% CI 0.28 to 0.47 inches); median chest compression fraction increased from 66.2% to 83.7% (difference 17.6%; 95% CI 15.0% to 20.1%); median preshock pause decreased from 26.9 to 15.5 seconds (difference -11.4 seconds; 95% CI -15.7 to -7.2 seconds); and mean ventilation rate decreased from 11.7 to 9.5/minute (difference -2.2/minute; 95% CI -3.9 to -0.5/minute). All-rhythms survival increased from phase 1 to phase 2 (20/231, 8.7% versus 35/252, 13.9%; difference 5.2%; 95% CI -0.4% to 10.8%), with an adjusted odds ratio of 2.72 (95% CI 1.15 to 6.41), controlling for initial rhythm, witnessed arrest, age, minimally interrupted cardiac resuscitation
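The unadjusted all-rhythms survival comparison can be reconstructed from the counts in the abstract. A minimal sketch using a standard two-proportion Wald interval (the abstract does not state the authors' exact interval method, and the adjusted odds ratio comes from their logistic regression, not this calculation):

```python
import math

def two_prop_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (p2 - p1) with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Phase 1: 20/231 survived; phase 2: 35/252 survived.
diff, lo, hi = two_prop_diff_ci(20, 231, 35, 252)
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")  # +5.2% (95% CI -0.4% to +10.8%)
```

The result matches the reported 5.2% difference with 95% CI -0.4% to 10.8%, confirming the CI straddles zero even though the adjusted analysis reached significance.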

  14. Boosting pitch encoding with audiovisual interactions in congenital amusia.

    Science.gov (United States)

    Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne

    2015-01-01

    The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective in an appropriate

  15. The maintenance training center of the paks nuclear power plant - past, present and future

    International Nuclear Information System (INIS)

    Kiss, I.

    2001-01-01

    The safety of the Paks nuclear power plant (Paks NPP) is a political-economic factor with general influence on the stability of the Hungarian economy. Since the beginning of the 1990s, the plant management has been making significant efforts to learn about the factors that define plant safety and to reveal areas where safety can be further improved. Major emphasis is also placed on the provision of resources and the creation of conditions necessary for the preservation of staff competence. In 1997 a separate, maintenance-specific facility was erected. The Maintenance Training Center of the Paks NPP is a unique facility worldwide. (orig.)

  16. Trigger Videos on the Web: Impact of Audiovisual Design

    Science.gov (United States)

    Verleur, Ria; Heuvelman, Ard; Verhagen, Plon W.

    2011-01-01

    Audiovisual design might impact emotional responses, as studies from the 1970s and 1980s on movie and television content show. Given today's abundant presence of web-based videos, this study investigates whether audiovisual design will impact web-video content in a similar way. The study is motivated by the potential influence of video-evoked…

  17. Development and preliminary evaluation of a prototype audiovisual biofeedback device incorporating a patient-specific guiding waveform

    Energy Technology Data Exchange (ETDEWEB)

    Venkat, Raghu B; Sawant, Amit; Suh, Yelin; Keall, Paul J [Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847 (United States); George, Rohini [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA (United States)], E-mail: Paul.Keall@stanford.edu

    2008-06-07

    The aim of this research was to investigate the effectiveness of a novel audio-visual biofeedback respiratory training tool to reduce respiratory irregularity. The audiovisual biofeedback system acquires sample respiratory waveforms of a particular patient and computes a patient-specific waveform to guide the patient's subsequent breathing. Two visual feedback models with different displays and cognitive loads were investigated: a bar model and a wave model. The audio instructions were ascending/descending musical tones played at inhale and exhale, respectively, to assist in maintaining the breathing period. Free-breathing, bar-model, and wave-model training was performed on ten volunteers for 5 min over three repeat sessions. A total of 90 respiratory waveforms were acquired. It was found that the bar model was superior to free breathing, with overall rms displacement variations of 0.10 and 0.16 cm, respectively, and rms period variations of 0.33 and 0.77 s, respectively. The wave model was superior to the bar model and free breathing for all volunteers, with an overall rms displacement variation of 0.08 cm and rms period variation of 0.2 s. The reduction in the displacement and period variations for the bar model compared with free breathing was statistically significant (p = 0.005 and 0.002, respectively); the wave model was significantly better than the bar model (p = 0.006 and 0.005, respectively). Audiovisual biofeedback with a patient-specific guiding waveform significantly reduces variations in breathing. The wave model approach reduces cycle-to-cycle variations in displacement by greater than 50% and variations in period by over 70% compared with free breathing. The planned application of this device is anatomic and functional imaging procedures and radiation therapy delivery. (note)
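The rms displacement and period variations reported above are per-cycle variability summaries. A minimal sketch of such a metric, using hypothetical per-breath measurements (the abstract does not specify the authors' exact pipeline):

```python
import numpy as np

def rms_variation(values):
    """RMS deviation of per-cycle measurements from their own mean."""
    v = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean((v - v.mean()) ** 2)))

# Hypothetical per-breath peak displacements (cm) and breathing periods (s)
peaks = [1.50, 1.42, 1.61, 1.48, 1.55]
periods = [3.9, 4.2, 3.7, 4.1, 4.0]

print(f"rms displacement variation: {rms_variation(peaks):.3f} cm")
print(f"rms period variation:       {rms_variation(periods):.3f} s")
```

Under biofeedback guidance, both numbers would be expected to shrink toward zero as breathing becomes more regular, which is the effect the study quantifies.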

  18. Development and preliminary evaluation of a prototype audiovisual biofeedback device incorporating a patient-specific guiding waveform

    International Nuclear Information System (INIS)

    Venkat, Raghu B; Sawant, Amit; Suh, Yelin; Keall, Paul J; George, Rohini

    2008-01-01

    The aim of this research was to investigate the effectiveness of a novel audio-visual biofeedback respiratory training tool to reduce respiratory irregularity. The audiovisual biofeedback system acquires sample respiratory waveforms of a particular patient and computes a patient-specific waveform to guide the patient's subsequent breathing. Two visual feedback models with different displays and cognitive loads were investigated: a bar model and a wave model. The audio instructions were ascending/descending musical tones played at inhale and exhale, respectively, to assist in maintaining the breathing period. Free-breathing, bar-model, and wave-model training was performed on ten volunteers for 5 min over three repeat sessions. A total of 90 respiratory waveforms were acquired. It was found that the bar model was superior to free breathing, with overall rms displacement variations of 0.10 and 0.16 cm, respectively, and rms period variations of 0.33 and 0.77 s, respectively. The wave model was superior to the bar model and free breathing for all volunteers, with an overall rms displacement variation of 0.08 cm and rms period variation of 0.2 s. The reduction in the displacement and period variations for the bar model compared with free breathing was statistically significant (p = 0.005 and 0.002, respectively); the wave model was significantly better than the bar model (p = 0.006 and 0.005, respectively). Audiovisual biofeedback with a patient-specific guiding waveform significantly reduces variations in breathing. The wave model approach reduces cycle-to-cycle variations in displacement by greater than 50% and variations in period by over 70% compared with free breathing. The planned application of this device is anatomic and functional imaging procedures and radiation therapy delivery. (note)

  19. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or may apply to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers … that observers did look near the mouth. We conclude that eye-movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  20. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    … effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine…

  1. Perception of Intersensory Synchrony in Audiovisual Speech: Not that Special

    Science.gov (United States)

    Vroomen, Jean; Stekelenburg, Jeroen J.

    2011-01-01

    Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired ("unity assumption"). Participants made…

  2. GÖRSEL-İŞİTSEL ÇEVİRİ / AUDIOVISUAL TRANSLATION

    Directory of Open Access Journals (Sweden)

    Sevtap GÜNAY KÖPRÜLÜ

    2016-04-01

    Full Text Available Audiovisual translation, which dates back to the silent film era, is a specialized translation method developed for films and for programs shown on TV and in cinemas. Accordingly, the term "film translation" was initially used for this type of translation. The growing number of audiovisual texts has attracted the interest of researchers, and the field is now assessed within translation studies. In Turkey, too, the concept of film translation was long used for this area, but more recently the term audiovisual translation has been adopted, especially in scientific work, since it covers not only films but all audiovisual communication media. This study analyzes the aspects a translator should take into consideration during the audiovisual translation process, within the framework of the source text, the translated text, the film, and technical knowledge. The study shows that, beyond linguistic and paralinguistic factors, there are further factors that must be considered carefully because they can influence the quality of the translation, and that these factors require technical knowledge on the translator's part. In this sense, the study approaches audiovisual translation from a different angle than previous research.

  3. 36 CFR 1237.12 - What record elements must be created and preserved for permanent audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... created and preserved for permanent audiovisual records? 1237.12 Section 1237.12 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC... permanent audiovisual records? For permanent audiovisual records, the following record elements must be...

  4. Spatial Frequency Requirements and Gaze Strategy in Visual-Only and Audiovisual Speech Perception

    Science.gov (United States)

    Wilson, Amanda H.; Alsius, Agnès; Parè, Martin; Munhall, Kevin G.

    2016-01-01

    Purpose: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. Method: We presented vowel-consonant-vowel utterances visually filtered at a range of frequencies in visual-only, audiovisual congruent, and audiovisual incongruent…

  5. From "Piracy" to Payment: Audio-Visual Copyright and Teaching Practice.

    Science.gov (United States)

    Anderson, Peter

    1993-01-01

    The changing circumstances in Australia governing the use of broadcast television and radio material in education are examined, from the uncertainty of the early 1980s to current management of copyrighted audiovisual material under the statutory licensing agreement between universities and an audiovisual copyright agency. (MSE)

  6. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    Science.gov (United States)

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  7. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    Science.gov (United States)

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Audiovisual consumption and its social logics on the web

    OpenAIRE

    Rose Marie Santini; Juan C. Calvi

    2013-01-01

    This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the global Internet traffic of audiovisual files since 2008 to identify the formats and modes of distribution and consumption of audiovisual contents that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as "Internet culture".

  9. 36 CFR 1237.10 - How must agencies manage their audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... their audiovisual, cartographic, and related records? 1237.10 Section 1237.10 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.10 How must agencies manage their audiovisual, cartographic, and related...

  10. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    Full Text Available We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large-vocabulary (approximately 1000 words) speech recognition experiments. The experiments used clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB with additive white Gaussian noise), and by 19% relative to the audio-only speech recognition WER under clean audio conditions.
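The PCA step described above, projecting FAP vectors onto a few principal axes to obtain compact visual features, can be sketched as follows (illustrative random data and dimensions, not the authors' corpus):

```python
import numpy as np

rng = np.random.default_rng(0)
faps = rng.normal(size=(500, 10))    # 500 video frames x 10 FAPs (illustrative)

# Center the data, then derive the principal axes via SVD of the centered matrix.
centered = faps - faps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)

k = 3                                # keep a few leading components
visual_features = centered @ vt[:k].T
print(visual_features.shape)         # (500, 3)
```

The resulting low-dimensional projection weights, one short vector per frame, are the kind of visual feature stream that would then be fed, alongside acoustic features, into the HMM-based recognizer.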

  11. Memory and learning with rapid audiovisual sequences

    Science.gov (United States)

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  12. Memory and learning with rapid audiovisual sequences.

    Science.gov (United States)

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.
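The trial structure described above, eight-item sequences whose second half either repeats the first half, with an auditory stream either perceptually correlated with or unrelated to the visual stream, can be sketched as follows (hypothetical stimulus ranges and mapping; not the authors' stimulus code):

```python
import random

def make_trial(repeat: bool, congruent: bool, rng=random):
    """Build one eight-item audiovisual sequence.

    repeat:    final four luminances replicate the first four (or are drawn anew).
    congruent: tone frequencies track luminance (or are unrelated).
    """
    first = [rng.uniform(0.2, 0.8) for _ in range(4)]
    second = first[:] if repeat else [rng.uniform(0.2, 0.8) for _ in range(4)]
    lum = first + second

    if congruent:
        # Hypothetical monotone mapping from luminance to frequency (Hz)
        freq = [440 * (1 + l) for l in lum]
    else:
        freq = [rng.uniform(440, 880) for _ in range(8)]
    return lum, freq

lum, freq = make_trial(repeat=True, congruent=True)
```

In a Hebb-repetition variant, a few such sequences would be stored and re-presented intermittently across trials, while the rest are generated fresh each time.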

  13. 77 FR 22803 - Certain Audiovisual Components and Products Containing the Same; Institution of Investigation...

    Science.gov (United States)

    2012-04-17

    ... INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-837] Certain Audiovisual Components and Products... importation of certain audiovisual components and products containing the same by reason of infringement of... importation, or the sale within the United States after importation of certain audiovisual components and...

  14. Audiovisual perceptual learning with multiple speakers.

    Science.gov (United States)

    Mitchel, Aaron D; Gerfen, Chip; Weiss, Daniel J

    2016-05-01

    One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "d-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "b-face" than with an image of the "d-face." This was not the case in the control condition when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

  15. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  16. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than reaction times to the incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.
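The event-related synchronization and desynchronization (ERS/ERD) effects reported here are conventionally quantified as the percentage change in band-limited power between a baseline and a post-stimulus window. The sketch below illustrates only that percent-change computation, not the SAM beamformer used in the study (which additionally requires MEG source reconstruction); the function names are illustrative:

```python
import math

def bandpower(signal, fs, f_lo, f_hi):
    """Mean power of `signal` in the band [f_lo, f_hi] Hz via a naive DFT.
    Illustrative only; real analyses use an FFT with windowing."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / n
            count += 1
    return total / count

def erd_percent(baseline, event, fs, f_lo, f_hi):
    """Percent band-power change from a baseline to an event window.
    Negative values indicate event-related desynchronization (ERD),
    positive values event-related synchronization (ERS)."""
    p_base = bandpower(baseline, fs, f_lo, f_hi)
    return 100.0 * (bandpower(event, fs, f_lo, f_hi) - p_base) / p_base
```

Applied to the 8-16 Hz band of the occipital finding, a negative value would correspond to the reported desynchronization.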

  17. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It gives an overview of the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of the synchrony measure for biometric identity verification based on talking faces is evaluated on the BANCA database.
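The simplest correspondence measures of this kind correlate a per-frame acoustic feature with a per-frame visual feature. As a minimal, hypothetical illustration (the paper surveys more elaborate measures; the feature names below are assumptions), one can take the maximum Pearson correlation between an audio energy stream and a mouth-opening-area stream over a small range of frame lags:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length feature streams."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)

def synchrony_score(audio_energy, mouth_area, max_lag=5):
    """Maximum audio-visual correlation over a small range of frame lags,
    so that slight audio-visual offsets still register as synchronous."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, v = audio_energy[lag:], mouth_area[:len(mouth_area) - lag]
        else:
            a, v = audio_energy[:lag], mouth_area[-lag:]
        best = max(best, pearson(a, v))
    return best
```

A high score on genuine talking-face footage and a low score on dubbed or replayed audio is the basic signal exploited for biometric verification.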

  18. Audiovisual consumption and its social logics on the web

    Directory of Open Access Journals (Sweden)

    Rose Marie Santini

    2013-06-01

This article analyzes the social logics underlying audiovisual consumption on digital networks. We retrieved data on the global Internet traffic of audiovisual files since 2008 to identify the formats, modes of distribution and modes of consumption of audiovisual content that tend to prevail on the Web. This research shows the types of social practices that are dominant among users and their relation to what we designate as “Internet culture”.

  19. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    DEFF Research Database (Denmark)

    Andersen, Tobias; Starrfelt, Randi

    2015-01-01

… Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing …

  20. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.

    Science.gov (United States)

    Altieri, Nicholas; Wenger, Michael J

    2013-01-01

    Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of -12 dB, and S/N ratio of -18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity.
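The capacity measure cited above (Townsend and Nozawa, 1995) compares the cumulative hazard of audiovisual response times with the sum of the unisensory cumulative hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)), where H(t) = -log S(t) and S(t) is the survivor function. A minimal empirical sketch, illustrative only (real analyses smooth these estimates over many trials and time points):

```python
import math

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t)."""
    return sum(1 for r in rts if r > t) / len(rts)

def capacity(av_rts, a_rts, v_rts, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)), where
    H(t) = -log S(t) is the estimated cumulative hazard at time t (ms).
    C(t) > 1 indicates efficient (super-capacity) integration,
    C(t) < 1 inefficient integration."""
    h_av = -math.log(survivor(av_rts, t))
    h_a = -math.log(survivor(a_rts, t))
    h_v = -math.log(survivor(v_rts, t))
    return h_av / (h_a + h_v)
```

In the study's terms, C(t) > 1 at low auditory S/N ratios and C(t) < 1 with a clear auditory signal would correspond to the efficient and inefficient integration patterns reported.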

  1. Search in audiovisual broadcast archives

    NARCIS (Netherlands)

    Huurnink, B.

    2010-01-01

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage from overseas services for the evening news, or a documentary maker describing the

  2. 2010 Canadian Cardiovascular Society/Canadian Society of Echocardiography Guidelines for Training and Maintenance of Competency in Adult Echocardiography.

    Science.gov (United States)

    Burwash, Ian G; Basmadjian, Arsene; Bewick, David; Choy, Jonathan B; Cujec, Bibiana; Jassal, Davinder S; MacKenzie, Scott; Nair, Parvathy; Rudski, Lawrence G; Yu, Eric; Tam, James W

    2011-01-01

    Guidelines for the provision of echocardiography in Canada were jointly developed and published by the Canadian Cardiovascular Society and the Canadian Society of Echocardiography in 2005. Since their publication, recognition of the importance of echocardiography to patient care has increased, along with the use of focused, point-of-care echocardiography by physicians of diverse clinical backgrounds and variable training. New guidelines for physician training and maintenance of competence in adult echocardiography were required to ensure that physicians providing either focused, point-of-care echocardiography or comprehensive echocardiography are appropriately trained and proficient in their use of echocardiography. In addition, revision of the guidelines was required to address technological advances and the desire to standardize echocardiography training across the country to facilitate the national recognition of a physician's expertise in echocardiography. This paper summarizes the new Guidelines for Physician Training and Maintenance of Competency in Adult Echocardiography, which are considerably more comprehensive than earlier guidelines and address many important issues not previously covered. These guidelines provide a blueprint for physician training despite different clinical backgrounds and help standardize physician training and training programs across the country. Adherence to the guidelines will ensure that physicians providing echocardiography have acquired sufficient expertise required for their specific practice. The document will also provide a framework for other national societies to standardize their training programs in echocardiography and will provide a benchmark by which competency in adult echocardiography may be measured. Copyright © 2011 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  3. NOTE: Development and preliminary evaluation of a prototype audiovisual biofeedback device incorporating a patient-specific guiding waveform

    Science.gov (United States)

    Venkat, Raghu B.; Sawant, Amit; Suh, Yelin; George, Rohini; Keall, Paul J.

    2008-06-01

The aim of this research was to investigate the effectiveness of a novel audio-visual biofeedback respiratory training tool to reduce respiratory irregularity. The audiovisual biofeedback system acquires sample respiratory waveforms of a particular patient and computes a patient-specific waveform to guide the patient's subsequent breathing. Two visual feedback models with different displays and cognitive loads were investigated: a bar model and a wave model. The audio instructions were ascending/descending musical tones played at inhale and exhale, respectively, to assist in maintaining the breathing period. Free-breathing, bar-model and wave-model training was performed on ten volunteers for 5 min for three repeat sessions. A total of 90 respiratory waveforms were acquired. It was found that the bar model was superior to free breathing, with overall rms displacement variations of 0.10 and 0.16 cm, respectively, and rms period variations of 0.33 and 0.77 s, respectively. The wave model was superior to the bar model and free breathing for all volunteers, with an overall rms displacement variation of 0.08 cm and rms period variation of 0.2 s. The reduction in the displacement and period variations for the bar model compared with free breathing was statistically significant (p = 0.005 and 0.002, respectively); the wave model was significantly better than the bar model (p = 0.006 and 0.005, respectively). Audiovisual biofeedback with a patient-specific guiding waveform significantly reduces variations in breathing. The wave model approach reduces cycle-to-cycle variations in displacement by greater than 50% and variations in period by over 70% compared with free breathing. The planned application of this device is anatomic and functional imaging procedures and radiation therapy delivery.
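The rms displacement and period variations quoted above are root-mean-square deviations of per-cycle values from the session mean. A small sketch of that computation, assuming peak times and amplitudes have already been extracted from the respiratory waveform (the extraction step itself is not shown):

```python
import math

def rms_variation(values):
    """Root-mean-square deviation of per-cycle values (e.g. peak displacement
    in cm, or cycle period in s) from their session mean."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def periods_from_peaks(peak_times):
    """Cycle periods as differences between successive inhale peak times (s)."""
    return [t2 - t1 for t1, t2 in zip(peak_times, peak_times[1:])]
```

A perfectly regular breather would score 0 on both measures; guided breathing aims to push both toward that limit.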

  4. Proving maintenance practices at France's CETIC facility

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    CETIC, a PWR maintenance testing, training and validation centre became operational in September 1986. It is designed to meet the following basic requirements: development of plant maintenance processes to reduce work time, validation of tools for use during maintenance, training and qualification of teams for performing high-technology, high-risk operations in nuclear power plants. (U.K.)

  5. Effects of a 14-month low-cost maintenance training program in patients with chronic systolic heart failure: a randomized study

    DEFF Research Database (Denmark)

    Prescott, Eva; Hjardem-Hansen, Rasmus; Dela, Flemming

    2009-01-01

… The primary endpoint was maximum workload; secondary endpoints were the 6-min walk test, incremental shuttle walk test, sit-to-stand test, quality of life, and serological markers. RESULTS: Six patients died and 43 completed the study. The initial 8-week training was associated with small but significant improvement in all … (…% CI: 3.0-13.0, P=0.002). No effect of the maintenance intervention was observed for the 6-min walk test, incremental shuttle walk test, sit-to-stand test, or quality of life. After 14 months, changes in most markers of inflammation, endothelial damage, and glycemic control were more beneficial in the intervention group. CONCLUSION: A low-cost maintenance intervention in CHF patients reduced the decline in maximum workload compared with usual care, but not in other measures of physical function. Results suggest beneficial effects of long-term maintenance training on glycemic control, inflammation …

  6. The process of developing audiovisual patient information: challenges and opportunities.

    Science.gov (United States)

    Hutchison, Catherine; McCreaddie, May

    2007-11-01

    The aim of this project was to produce audiovisual patient information, which was user friendly and fit for purpose. The purpose of the audiovisual patient information is to inform patients about randomized controlled trials, as a supplement to their trial-specific written information sheet. Audiovisual patient information is known to be an effective way of informing patients about treatment. User involvement is also recognized as being important in the development of service provision. The aim of this paper is (i) to describe and discuss the process of developing the audiovisual patient information and (ii) to highlight the challenges and opportunities, thereby identifying implications for practice. A future study will test the effectiveness of the audiovisual patient information in the cancer clinical trial setting. An advisory group was set up to oversee the project and provide guidance in relation to information content, level and delivery. An expert panel of two patients provided additional guidance and a dedicated operational team dealt with the logistics of the project including: ethics; finance; scriptwriting; filming; editing and intellectual property rights. Challenges included the limitations of filming in a busy clinical environment, restricted technical and financial resources, ethical needs and issues around copyright. There were, however, substantial opportunities that included utilizing creative skills, meaningfully involving patients, teamworking and mutual appreciation of clinical, multidisciplinary and technical expertise. Developing audiovisual patient information is an important area for nurses to be involved with. However, this must be performed within the context of the multiprofessional team. Teamworking, including patient involvement, is crucial as a wide variety of expertise is required. Many aspects of the process are transferable and will provide information and guidance for nurses, regardless of specialty, considering developing this

  7. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

Speech perception is audiovisual, as evidenced by the McGurk effect, in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or may be applied to all stimuli in general. To investigate … of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task, and this could increase … visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli only showed a McGurk effect when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding …

  8. Situación actual de la traducción audiovisual en Colombia

    Directory of Open Access Journals (Sweden)

    Jeffersson David Orrego Carmona

    2010-05-01

Objectives: this article has two aims: to present an overview of the current audiovisual translation market in Colombia and to highlight the importance of developing research in this area. Method: the methodology included reviewing the literature on the topic, surveying different groups involved in audiovisual translation, and analyzing the results. Results: these showed a general lack of awareness of this work, as well as the surveyed groups' preferences among audiovisual translation modalities; a marked preference for subtitling was observed, for reasons particular to each group. Conclusions: Colombian translators need training in audiovisual translation to meet market demands, and the importance of developing more in-depth studies focused on the development of audiovisual translation in Colombia is highlighted.

  9. Control of training instrument

    International Nuclear Information System (INIS)

    Seo, K. W.; Joo, Y. C.; Park, J. C.; Hong, C. S.; Choi, I. K.; Cho, B. J.; Lee, H. Y.; Seo, I. S.; Park, N. K.

    1996-01-01

This report describes the annual results on the control of training instruments. The scope and contents are the following: 1. Control of the Compact Nuclear Simulator 2. Control of Radiation/Radioactivity Measurement 3. Control of Non-Destructive Testing Equipment 4. Control of Chemical Equipment 5. Control of Personal Computers 6. Other related Lecture Aid Equipment. Efforts were made to upgrade the training environment by retrofitting experimental facilities, compiling teaching materials and reinforcing audio-visual aids. The Nuclear Training Center ran open-door training courses for 2,496 engineers/scientists from the nuclear regulatory body, nuclear industries, research institutes and other related organizations, offering 45 training courses during the fiscal year 1995. (author). 15 tabs., 7 figs., 13 refs

  10. A conceptual framework for audio-visual museum media

    DEFF Research Database (Denmark)

    Kirkedahl Lysholm Nielsen, Mikkel

    2017-01-01

In today's history museums, the past is communicated through many other means than original artefacts. This interdisciplinary and theoretical article suggests a new approach to studying the use of audio-visual media, such as film, video and related media types, in a museum context. The centre … and museum studies, existing case studies, and real-life observations, the suggested framework instead stresses particular characteristics of the contextual use of audio-visual media in history museums, such as authenticity, virtuality, interactivity, social context and spatial attributes of the communication …

  11. Understanding the basics of audiovisual archiving in Africa and the ...

    African Journals Online (AJOL)

    In the developed world, the cultural value of the audiovisual media gained legitimacy and widening acceptance after World War II, and this is what Africa still requires. There are a lot of problems in Africa, and because of this, activities such as preservation of a historical record, especially in the audiovisual media are seen as ...

  12. EXPERIMENTAL VALIDATION FOR THE TRAINING METHOD AND MATHEMATICAL MODEL OF THE PILOT SKILL FORMATION IN MAINTENANCE OF ATTITUDE ORIENTATION

    Directory of Open Access Journals (Sweden)

    Maksim BARABANOV

    2017-12-01

In order to overcome the drawbacks of the artificial horizon indicator (HI) of the inside-in type (a view from the aircraft (A/C)), with which pilots most often make mistakes in the maintenance of attitude orientation, the authors offer a novel training method. The method is based on the hypothesis that the manipulative ability of the human visual system can be trained. A mathematical model for data accumulation during the corresponding training procedure has been proposed. The construction, design and results of the model evaluation are presented in the article. The experimental results revealed an increase in the probability of faultless operation by the test group to 0.892, whereas the faultless-operation probability of the control group was 0.726. Thus, the trainee students statistically increased the reliability of the maintenance of attitude orientation thanks to the proposed method, and the hypothesis was confirmed.
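Comparing faultless-operation probabilities such as 0.892 versus 0.726 between a test and a control group is commonly done with a two-proportion z-test. A sketch under assumed group sizes (the abstract does not report them; 100 trials per group below is a stand-in):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for H0: p_a == p_b, using the pooled success proportion."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With roughly the reported proportions and 100 trials per group, z exceeds the 1.96 threshold for significance at the 0.05 level; whether this matches the authors' actual test depends on their (unreported) sample sizes.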

  13. Winter maintenance performance measure.

    Science.gov (United States)

    2016-01-01

The Winter Performance Index is a method of quantifying winter storm events and the DOT's response to them. It is a valuable tool for evaluating the State's maintenance practices, performing post-storm analysis, and training maintenance personnel…

  14. Training report of the FBR cycle training facility in 2004FY

    International Nuclear Information System (INIS)

    Watanabe, Toshio; Sasaki, Kazuichi; Sawada, Makoto; Ohtsuka, Jirou

    2004-07-01

The FBR cycle training facility consists of a sodium handling training facility and a maintenance training facility, and contributes to training the operators and maintenance workers of the prototype fast breeder reactor 'Monju'. Training courses have been added to both the sodium handling technology and maintenance technology curricula every year in order to provide meaningful training in preparation for the restart of Monju. To strengthen the sodium handling technology training in 2003FY, a sodium heat transfer basic course was added as the 9th sodium handling training course, with the aim of teaching the basic principles of sodium heat transfer. For the maintenance training course, a 'Monju Systems Learning Training Course', which aims to impart the knowledge needed by engineers involved in Monju development, was newly provided this year as an improvement to the maintenance curriculum. In 2003FY, nine sodium handling technology training courses were held a total of 33 times, and 235 trainees took part; nine maintenance technology training courses were held 15 times, with a total of 113 participants. In addition, the 4th special lecture on sodium technology by an instructor from the French sodium school was held on Mar. 15-17, with 34 participants. Consequently, the cumulative number of trainees since the FBR cycle training facility opened in October 2000 has reached 1,236. (author)

  15. Audio-visual temporal recalibration can be constrained by content cues regardless of spatial overlap

    Directory of Open Access Journals (Sweden)

Warrick Roseboom

    2013-04-01

It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated whether this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differ only in featural content. Using both complex (audio-visual speech; Experiment 1) and simple stimuli (high- and low-pitch audio matched with either vertically or horizontally oriented Gabors; Experiment 2), we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  16. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    Science.gov (United States)

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants that are easily visually distracted and those who are not. To do so, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second experiment focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second one, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  17. 36 CFR 1237.26 - What materials and processes must agencies use to create audiovisual records?

    Science.gov (United States)

    2010-07-01

    ... must agencies use to create audiovisual records? 1237.26 Section 1237.26 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL, CARTOGRAPHIC, AND RELATED RECORDS MANAGEMENT § 1237.26 What materials and processes must agencies use to create audiovisual...

  18. Improving human performance in maintenance personnel

    International Nuclear Information System (INIS)

    Gonzalez Anez, Francisco; Agueero Agueero, Jorge

    2010-01-01

The continuous evolution and improvement of safety-related processes has included the analysis, design and development of training plans for the qualification of maintenance personnel at nuclear power plants. In this respect, the international references in this area recommend the establishment of systematic qualification programmes for personnel performing functions or carrying out safety-related tasks. Maintenance personnel qualification processes have improved significantly, and training plans have been designed and developed for each job position based on the Systematic Approach to Training methodology. These improvements have been clearly reflected in recent training programmes with new training material and training facilities focused not only on developing technical knowledge and skills but also on improving attitudes and safety culture. The training facilities used in the maintenance training centre, such as laboratories, real and virtual mock-ups, hydraulic loops, field simulators and other training material, are intended to cover the training needs for initial and continuing qualification. Evidently, all these improvements made in the qualification of plant personnel should be extended to include supplemental personnel (external or contracted) performing safety-related tasks. The supplemental personnel constitute a very broad group, covering the performance of multiple activities entailing different levels of responsibility. Some of these activities are performed permanently at the plant, while others are occasional or sporadic. In order to establish qualification requirements for these supplemental workers, it is recommended to carry out a rigorous analysis of job positions and tasks. The objective will be to identify the qualification requirements that assure competence and safety. (authors)

  19. La estación de trabajo del traductor audiovisual: Herramientas y Recursos.

    Directory of Open Access Journals (Sweden)

    Anna Matamala

    2005-01-01

In this article, we discuss the relationship between audiovisual translation and new technologies, and describe the characteristics of the audiovisual translator's workstation, especially as regards dubbing and voiceover. After presenting the tools the translator needs to perform his/her task satisfactorily, and pointing to future perspectives, we list sources that can be consulted in order to solve translation problems, including those available on the Internet. Keywords: audiovisual translation, new technologies, Internet, translator's tools.

  20. A general audiovisual temporal processing deficit in adult readers with dyslexia

    NARCIS (Netherlands)

    Francisco, A.A.; Jesse, A.; Groen, M.A.; McQueen, J.M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with

  1. Does hearing aid use affect audiovisual integration in mild hearing impairment?

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Colonius, Hans

    2018-04-01

    There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and those with profound hearing loss who are provided with cochlear implants, compared to normal-hearing adults. Still, little is known on the effects of hearing aid use on audiovisual integration in mild hearing loss, although this constitutes one of the most prevalent conditions in the elderly and, yet, often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids, the non-users, by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared to their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use seem to affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
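The signal detection analyses mentioned above conventionally estimate perceptual sensitivity and bias from hit and false-alarm rates as d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2. A minimal sketch of those estimates (the log-linear correction used here is one common choice, not necessarily the authors'):

```python
from statistics import NormalDist

def dprime_and_criterion(hits, n_signal, false_alarms, n_noise):
    """Sensitivity d' = z(H) - z(F) and response bias c = -(z(H) + z(F)) / 2
    for a yes/no detection task. A log-linear correction keeps the z-scores
    finite when a rate would otherwise be exactly 0 or 1."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (n_signal + 1)         # corrected hit rate
    f = (false_alarms + 0.5) / (n_noise + 1)  # corrected false-alarm rate
    zh, zf = z(h), z(f)
    return zh - zf, -(zh + zf) / 2
```

In a sound-induced flash illusion paradigm, a shift in d' between groups would indicate a change in perceptual sensitivity, while a shift in c would indicate a change in response bias, the two effects the study distinguishes.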

  2. Multiple concurrent temporal recalibrations driven by audiovisual stimuli with apparent physical differences.

    Science.gov (United States)

    Yuan, Xiangyong; Bi, Cuihua; Huang, Xiting

    2015-05-01

Out-of-synchrony experiences can easily recalibrate one's subjective simultaneity point in the direction of the experienced asynchrony. Although temporal adjustment of multiple audiovisual stimuli has been recently demonstrated to be spatially specific, perceptual grouping processes that organize separate audiovisual stimuli into distinctive "objects" may play a more important role in forming the basis for subsequent multiple temporal recalibrations. We investigated whether apparent physical differences between audiovisual pairs that make them distinct from each other can independently drive multiple concurrent temporal recalibrations regardless of spatial overlap. Experiment 1 verified that reducing the physical difference between two audiovisual pairs diminishes the multiple temporal recalibrations by exposing observers to two utterances with opposing temporal relationships spoken by a single speaker rather than two distinct speakers at the same location. Experiment 2 found that increasing the physical difference between the two stimulus pairs can promote multiple temporal recalibrations by complicating their non-temporal dimensions (e.g., disks composed of two rather than one attribute and tones generated by multiplying two frequencies); however, these recalibration aftereffects were subtle. Experiment 3 further revealed that making the two audiovisual pairs differ in temporal structures (one transient and one gradual) was sufficient to drive concurrent temporal recalibration. These results confirm that the more audiovisual pairs physically differ, especially in temporal profile, the more likely multiple temporal perception adjustments will be content-constrained regardless of spatial overlap. These results indicate that multiple temporal recalibrations operate on the outcome of perceptual grouping processes.

  3. A pilot study of audiovisual family meetings in the intensive care unit.

    Science.gov (United States)

    de Havenon, Adam; Petersen, Casey; Tanana, Michael; Wold, Jana; Hoesch, Robert

    2015-10-01

    We hypothesized that virtual family meetings in the intensive care unit with conference calling or Skype videoconferencing would result in increased family member satisfaction and more efficient decision making. This is a prospective, nonblinded, nonrandomized pilot study. A 6-question survey was completed by family members after family meetings, some of which used conference calling or Skype by choice. Overall, 29 (33%) of the completed surveys came from audiovisual family meetings vs 59 (67%) from control meetings. The survey data were analyzed using hierarchical linear modeling, which did not find any significant group differences between satisfaction with the audiovisual meetings vs controls. There was no association between the audiovisual intervention and withdrawal of care (P = .682) or overall hospital length of stay (z = 0.885, P = .376). Although we do not report benefit from an audiovisual intervention, these results are preliminary and heavily influenced by notable limitations to the study. Given that the intervention was feasible in this pilot study, audiovisual and social media intervention strategies warrant additional investigation given their unique ability to facilitate communication among family members in the intensive care unit. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Exploring Student Perceptions of Audiovisual Feedback via Screencasting in Online Courses

    Science.gov (United States)

    Mathieson, Kathleen

    2012-01-01

    Using Moore's (1993) theory of transactional distance as a framework, this action research study explored students' perceptions of audiovisual feedback provided via screencasting as a supplement to text-only feedback. A crossover design was employed to ensure that all students experienced both text-only and text-plus-audiovisual feedback and to…

  5. La comunicación corporativa audiovisual: propuesta metodológica de estudio

    OpenAIRE

    Lorán Herrero, María Dolores

    2016-01-01

This research revolves around two concepts, Audiovisual Communication and Corporate Communication, disciplines that affect organizations and that interlock in such a way that they give rise to Audiovisual Corporate Communication, the concept proposed in this thesis. A classification and definition of the formats that organizations use for their communication is provided. The aim is to be able to analyze any corporate audiovisual document in order to verify whether the…

  6. 77 FR 16561 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  7. 36 CFR 1237.14 - What are the additional scheduling requirements for audiovisual, cartographic, and related records?

    Science.gov (United States)

    2010-07-01

    ... scheduling requirements for audiovisual, cartographic, and related records? 1237.14 Section 1237.14 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT AUDIOVISUAL... audiovisual, cartographic, and related records? The disposition instructions should also provide that...

  8. 77 FR 16560 - Certain Audiovisual Components and Products Containing the Same; Notice of Receipt of Complaint...

    Science.gov (United States)

    2012-03-21

    ... INTERNATIONAL TRADE COMMISSION [DN 2884] Certain Audiovisual Components and Products Containing.... International Trade Commission has received a complaint entitled Certain Audiovisual Components and Products... audiovisual components and products containing the same. The complaint names as respondents Funai Electric...

  9. Psychophysiological effects of audiovisual stimuli during cycle exercise.

    Science.gov (United States)

    Barreto-Silva, Vinícius; Bigliassi, Marcelo; Chierotti, Priscila; Altimari, Leandro R

    2018-05-01

Immersive environments induced by audiovisual stimuli are hypothesised to facilitate the control of movements and ameliorate fatigue-related symptoms during exercise. The objective of the present study was to investigate the effects of pleasant and unpleasant audiovisual stimuli on perceptual and psychophysiological responses during moderate-intensity exercises performed on an electromagnetically braked cycle ergometer. Twenty young adults were administered three experimental conditions in a randomised and counterbalanced order: unpleasant stimulus (US; e.g. images depicting laboured breathing); pleasant stimulus (PS; e.g. images depicting pleasant emotions); and neutral stimulus (NS; e.g. neutral facial expressions). The exercise lasted 10 min (2 min of warm-up + 6 min of exercise + 2 min of warm-down). During all conditions, the rate of perceived exertion and heart rate variability were monitored to further the understanding of the moderating influence of audiovisual stimuli on perceptual and psychophysiological responses, respectively. The results of the present study indicate that PS ameliorated fatigue-related symptoms and reduced the physiological stress imposed by the exercise bout. Conversely, US increased the global activity of the autonomic nervous system and increased exertional responses to a greater degree when compared to PS. Accordingly, audiovisual stimuli appear to induce a psychophysiological response in which individuals visualise themselves within the story presented in the video. In such instances, individuals appear to copy the behaviour observed in the videos as if the situation were real. This mirroring mechanism has the potential to up-/down-regulate cardiac work as if the exercise intensities were in fact different in each condition.

  10. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulation modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, while other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
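The leave-one-participant-out scheme used to test the classifiers can be sketched as follows. This is a minimal illustration of the cross-validation structure only: the trivial majority-class predictor below is a placeholder standing in for the study's sparse Bayesian logistic regression, and the data layout is hypothetical.

```python
from collections import Counter

def leave_one_subject_out(data):
    """data: list of (subject_id, features, label) tuples. Train on all
    subjects but one, test on the held-out subject, repeat for every
    subject, and return per-subject accuracies."""
    subjects = sorted({s for s, _, _ in data})
    accuracies = {}
    for held_out in subjects:
        train = [(x, y) for s, x, y in data if s != held_out]
        test = [(x, y) for s, x, y in data if s == held_out]
        # Placeholder classifier: predict the majority training label.
        majority = Counter(y for _, y in train).most_common(1)[0][0]
        correct = sum(1 for _, y in test if y == majority)
        accuracies[held_out] = correct / len(test)
    return accuracies
```

Because each test fold contains only data from a subject never seen in training, the resulting accuracy estimates how well the learned signature patterns generalize across participants.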

  11. Maintenance proficiency evaluation test bank

    International Nuclear Information System (INIS)

    Maier, Loran

    2003-01-01

The Maintenance Proficiency Evaluation Test Bank (MPETB) is an Electric Power Research Institute- (EPRI-) operated, utility-sponsored means of developing, maintaining, and disseminating secure, high-quality written and performance maintenance proficiency tests. EPRI's charter is to ensure that all tests and test items that go into the Test Bank have been validated, screened for reliability, and evaluated to high standards of psychometric excellence. Proficiency tests of maintenance personnel (mechanics, electricians, and instrumentation and control [I and C] technicians) are most often used to determine whether an experienced employee is capable of performing maintenance tasks without further training. Such tests provide objective evidence for decisions to exempt an employee from what, for that employee, is unnecessary training. This leads to considerable savings in training costs and increased productivity, because supervisors can assign personnel to tasks at which their competence is proven. The ultimate objective of proficiency evaluation is to ensure that qualified maintenance personnel are available to meet the maintenance requirements of the plant. Numerous task-specific MPE tests (both written and performance) have been developed and validated using the EPRI MPE methodology by the utilities participating in the MPETB project. A task-specific MPE consists of a multiple-choice written examination and a multi-step performance evaluation that can be used to assess an individual's present knowledge and skill level for a given maintenance task. The MPETB contains MPEs and test items for the mechanical, electrical, and I and C classifications that are readily available to participating utilities. Presently, utilities are placing emphasis on developing MPEs to evaluate outage-related maintenance tasks that demonstrate the competency and qualifications of plant and contractor personnel before the start of outage work. Utilities are also using the MPE methodology and process to…

  12. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT XX, CUMMINS DIESEL ENGINE, MAINTENANCE SUMMARY.

    Science.gov (United States)

    Minnesota State Dept. of Education, St. Paul. Div. of Vocational and Technical Education.

This module of a 30-module course is designed to provide a summary of the reasons and procedures for diesel engine maintenance. Topics are what engine break-in means, engine break-in, torquing bearings (template method), and the need for maintenance. The module consists of a self-instructional branch-programed training film "Cummins Diesel Engine…

  13. KAMAN PELAYANAN MEDIA AUDIOVISUAL: STUDI KASUS DI THE BRITISH COUNCIL JAKARTA

    Directory of Open Access Journals (Sweden)

    Hindar Purnomo

    2015-12-01

Full Text Available The aim of this study was to determine how audiovisual (AV) media services are delivered, the effectiveness of those services, and users' satisfaction with various aspects of the service. The research was conducted at The British Council Jakarta as an evaluation study, since this approach reveals the various phenomena involved. The British Council library provides three types of media: video cassettes, audio cassettes, and BBC television broadcasts. The subjects were registered members who used the audiovisual media services, grouped by age and by purpose of AV media use. Questionnaire data were collected from 157 respondents (75.48%) and analyzed statistically with the Kruskal-Wallis one-way analysis of variance. The results show that all three media were popular with many users, especially in the younger age groups. Most users preferred fiction to nonfiction and used audiovisual media to seek knowledge and information. The audiovisual media service proved highly effective, as shown by collection-use figures and user satisfaction levels. Hypothesis testing showed no significant differences between age groups or purposes of use in their assessment of the various aspects of the audiovisual media service. Keywords: Audiovisual Media, Library Services

  14. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  15. Claves para reconocer los niveles de lectura crítica audiovisual en el niño Keys to Recognizing the Levels of Critical Audiovisual Reading in Children

    Directory of Open Access Journals (Sweden)

    Jacqueline Sánchez Carrero

    2012-03-01

    workshops on media literacy. The groups had been instructed on the audiovisual universe, which allowed them to analyze, deconstruct and recreate audiovisual content. Firstly, this article refers to the evolving concept of media education. Secondly, the common experiences in the three countries are described, with special attention to the influence of indicators that gauge the level of critical reading. Finally, we reflect on the need for media education in the era of multi-literacy. It is unusual to find studies that reveal the keys to assessing the levels of critical consumption of digital media content in children, and this is essential for determining the level of children's understanding before and after training processes in media education.

  16. 36 CFR 1235.42 - What specifications and standards for transfer apply to audiovisual records, cartographic, and...

    Science.gov (United States)

    2010-07-01

    ... standards for transfer apply to audiovisual records, cartographic, and related records? 1235.42 Section 1235... Standards § 1235.42 What specifications and standards for transfer apply to audiovisual records... elements that are needed for future preservation, duplication, and reference for audiovisual records...

  17. Neuromorphic Audio-Visual Sensor Fusion on a Sound-Localising Robot

    Directory of Open Access Journals (Sweden)

    Vincent Yue-Sek Chan

    2012-02-01

Full Text Available This paper presents the first robotic system featuring audio-visual sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localisation through self-motion and visual feedback, using an adaptive ITD-based sound localisation algorithm. After training, the robot can localise sound sources (white or pink noise) in a reverberant environment with an RMS error of 4 to 5 degrees in azimuth. In the second part of the paper, we investigate the source binding problem. An experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. The results show that this technique can be quite effective, despite its simplicity.
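For intuition about what the robot's adaptive algorithm learns, the classical free-field approximation maps an interaural time difference (ITD) to azimuth via azimuth = arcsin(ITD · c / d), where c is the speed of sound and d the inter-ear distance. A minimal sketch with illustrative parameter values (the robot in the paper learned this mapping from self-motion and visual feedback rather than using a closed form):

```python
import math

def itd_to_azimuth(itd_s, head_width_m=0.18, speed_of_sound=343.0):
    """Estimate source azimuth (degrees) from an interaural time
    difference in seconds, using the free-field approximation
    azimuth = arcsin(ITD * c / d)."""
    x = itd_s * speed_of_sound / head_width_m
    x = max(-1.0, min(1.0, x))  # clamp to the valid arcsin domain
    return math.degrees(math.asin(x))
```

With these values an ITD of zero maps to straight ahead (0 degrees), and the maximum physical ITD of d/c maps to a source at 90 degrees to one side.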

  18. 16 CFR 307.8 - Requirements for disclosure in audiovisual and audio advertising.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Requirements for disclosure in audiovisual and audio advertising. 307.8 Section 307.8 Commercial Practices FEDERAL TRADE COMMISSION REGULATIONS... ACT OF 1986 Advertising Disclosures § 307.8 Requirements for disclosure in audiovisual and audio...

  19. A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection

    Science.gov (United States)

    Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing

    2015-01-01

    Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC. PMID:26123281

  20. A General Audiovisual Temporal Processing Deficit in Adult Readers with Dyslexia

    Science.gov (United States)

    Francisco, Ana A.; Jesse, Alexandra; Groen, Margriet A.; McQueen, James M.

    2017-01-01

    Purpose: Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. Method: We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of…

  1. Film Studies in Motion : From Audiovisual Essay to Academic Research Video

    NARCIS (Netherlands)

    Kiss, Miklós; van den Berg, Thomas

    2016-01-01

Our media-rich, open-access Scalar e-book on the Audiovisual Essay practice, co-written with Thomas van den Berg, is available online: http://scalar.usc.edu/works/film-studies-in-motion. Audiovisual essaying should be more than an appropriation of traditional video artistry, or a mere…

  2. The organization and reorganization of audiovisual speech perception in the first year of life.

    Science.gov (United States)

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  3. Ordinal models of audiovisual speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2011-01-01

    Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe...

  4. Design Guidelines for the Development of Virtual Reality and Augmented Reality Training Systems for Maintenance and Assembly Tasks

    Directory of Open Access Journals (Sweden)

    Tecchia Franco

    2011-12-01

Full Text Available The current work describes design guidelines for the development of Virtual Reality (VR) and Augmented Reality (AR) platforms to train technicians on maintenance and assembly tasks for industrial machinery. The main skill involved in this kind of task is procedural skill. Based on past literature and studies conducted within the SKILLS project, several main design guidelines were formulated. First, observational learning integrated properly within the training protocol increases training efficiency. Second, training protocols combining physical and cognitive fidelity enhance procedural skill acquisition. Third, guidance aids should be provided in a proper and controlled way. Last, enriched information about the task helps trainees to develop a useful mental model of the task. These recommendations were implemented in both the VR and AR training platforms.

  5. An operator training simulator based on interactive virtual teleoperation: nuclear facilities maintenance applications

    International Nuclear Information System (INIS)

    Kim, Ki Ho; Kim, Seung Ho

    1997-01-01

Remote manipulation in hazardous nuclear environments is very often complex and difficult to operate and requires excessively careful preparation. Remote slave manipulators for unstructured work are manually controlled by a human operator. Small errors made by the operator via the master manipulator during operation can subject the slave to excessive forces and result in considerable damage to the slave itself and its environment. In this paper, we present a prototype of an operator training simulator for use in nuclear facilities maintenance applications, as part of the ongoing Nuclear Robotics Development Program at the Korea Atomic Energy Research Institute (KAERI). The operator training simulator provides a means by which the operator can, in virtual task simulation, try out and train in advance for the remote tasks that the real slave manipulator will perform. The operator interacts with both the virtual slave and the task environment through the real master. Virtual interaction force feedback is provided to the operator. We also describe a man-in-the-loop control scheme to realize bilateral force reflection in virtual teleoperation.

  6. Audiovisual cultural heritage: bridging the gap between digital archives and its users

    NARCIS (Netherlands)

    Ongena, G.; Donoso, Veronica; Geerts, David; Cesar, Pablo; de Grooff, Dirk

    2009-01-01

    This document describes a PhD research track on the disclosure of audiovisual digital archives. The domain of audiovisual material is introduced as well as a problem description is formulated. The main research objective is to investigate the gap between the different users and the digital archives.

  7. Catching Audiovisual Interactions With a First-Person Fisherman Video Game.

    Science.gov (United States)

    Sun, Yile; Hickey, Timothy J; Shinn-Cunningham, Barbara; Sekuler, Robert

    2017-07-01

    The human brain is excellent at integrating information from different sources across multiple sensory modalities. To examine one particularly important form of multisensory interaction, we manipulated the temporal correlation between visual and auditory stimuli in a first-person fisherman video game. Subjects saw rapidly swimming fish whose size oscillated, either at 6 or 8 Hz. Subjects categorized each fish according to its rate of size oscillation, while trying to ignore a concurrent broadband sound seemingly emitted by the fish. In three experiments, categorization was faster and more accurate when the rate at which a fish oscillated in size matched the rate at which the accompanying, task-irrelevant sound was amplitude modulated. Control conditions showed that the difference between responses to matched and mismatched audiovisual signals reflected a performance gain in the matched condition, rather than a cost from the mismatched condition. The performance advantage with matched audiovisual signals was remarkably robust over changes in task demands between experiments. Performance with matched or unmatched audiovisual signals improved over successive trials at about the same rate, emblematic of perceptual learning in which visual oscillation rate becomes more discriminable with experience. Finally, analysis at the level of individual subjects' performance pointed to differences in the rates at which subjects can extract information from audiovisual stimuli.

  8. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception

    DEFF Research Database (Denmark)

    Baart, Martijn; Lindborg, Alma Cornelia; Andersen, Tobias S

    2017-01-01

Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was comparable to the suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli.

  9. Enhancing audiovisual experience with haptic feedback: a survey on HAV.

    Science.gov (United States)

    Danieau, F; Lecuyer, A; Guillotel, P; Fleureau, J; Mollet, N; Christie, M

    2013-01-01

    Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation, and virtual reality. Today there is a growing interest among researchers in integrating haptic feedback into audiovisual systems. A new medium emerges from this effort: haptic-audiovisual (HAV) content. This paper presents the techniques, formalisms, and key results pertinent to this medium. We first review the three main stages of the HAV workflow: the production, distribution, and rendering of haptic effects. We then highlight the pressing necessity for evaluation techniques in this context and discuss the key challenges in the field. By building on existing technologies and tackling the specific challenges of the enhancement of audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial and societal stakes are significant.

  10. The effects of Crew Resource Management (CRM) training in airline maintenance: Results following three year's experience

    Science.gov (United States)

    Taylor, J. C.; Robertson, M. M.

    1995-01-01

    An airline maintenance department undertook a CRM training program to change its safety and operating culture. In 2 1/2 years this airline trained 2200 management staff and salaried professionals. Participants completed attitude surveys immediately before and after the training, as well as two months, six months, and one year afterward. On-site interviews were conducted to test and confirm the survey results. Comparing managers' attitudes immediately after their training with their pretraining attitudes showed significant improvement for three attitudes. A fourth attitude, assertiveness, improved significantly above the pretraining levels two months after training. The expected effect of the training on all four attitude scales did not change significantly thereafter. Participants' self-reported behaviors and interview comments confirmed their shift from passive to more active behaviors over time. Safety, efficiency, and dependability performance were measured before the onset of the training and for some 30 months afterward. Associations with subsequent performance were strongest with positive attitudes about sharing command (participation), assertiveness, and stress management when those attitudes were measured 2 and 12 months after the training. The two month follow-up survey results were especially strong and indicate that active behaviors learned from the CRM training consolidate and strengthen in the months immediately following training.

  11. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  12. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory/visual discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with increasing SOA was similar to that of younger adults; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, demonstrating that audiovisual integration was delayed more severely as SOA increased, especially in peak latency for the V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  13. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  14. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of the lip-reading advantage for Japanese young adults by desynchronizing the visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. Moreover, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results imply that audio delays of up to 120 ms do not disrupt the lip-reading advantage, because the visual and auditory information in speech seems to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  15. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    OpenAIRE

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated, and opposing, estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possib...

  16. Detecting Functional Connectivity During Audiovisual Integration with MEG: A Comparison of Connectivity Metrics.

    Science.gov (United States)

    Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard

    2015-08-01

    In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
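The contrast the authors draw between amplitude-based and phase-based connectivity metrics can be illustrated on synthetic signals. The sketch below is illustrative only, not the authors' MEG pipeline: the beta-band limits, signal parameters, and the `envelope_correlation` and `phase_locking_value` helpers are all assumptions. It computes an amplitude-envelope correlation and a phase-locking value for two coupled band-limited oscillations:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_filter(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def envelope_correlation(x, y, fs, band=(15.0, 30.0)):
    """Amplitude-based metric: correlate Hilbert envelopes within a band."""
    ex = np.abs(hilbert(band_filter(x, fs, *band)))
    ey = np.abs(hilbert(band_filter(y, fs, *band)))
    return float(np.corrcoef(ex, ey)[0, 1])

def phase_locking_value(x, y, fs, band=(15.0, 30.0)):
    """Phase-based metric: length of the mean phase-difference vector."""
    px = np.angle(hilbert(band_filter(x, fs, *band)))
    py = np.angle(hilbert(band_filter(y, fs, *band)))
    return float(np.abs(np.mean(np.exp(1j * (px - py)))))

# Synthetic "beta band" signals: shared slow amplitude envelope, fixed phase lag.
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0.0, 20.0, 1.0 / fs)
env = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)
x = env * np.sin(2 * np.pi * 22 * t) + 0.1 * rng.standard_normal(t.size)
y = env * np.sin(2 * np.pi * 22 * t + 1.0) + 0.1 * rng.standard_normal(t.size)

aec = envelope_correlation(x, y, fs)   # high: envelopes co-vary
plv = phase_locking_value(x, y, fs)    # high: constant phase lag
```

On signals like these, with a shared slow envelope and a fixed phase lag, both metrics come out high; the interest of the study is that real MEG data can dissociate the two families of measures.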

  17. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. [Teaching with audiovisual cases in the virtual environment: methodology and results]

    OpenAIRE

    Triadó i Ivern, Xavier Ma.; Aparicio Chueca, Ma. del Pilar (María del Pilar); Jaría Chacón, Natalia; Gallardo-Gallardo, Eva; Elasri Ejjaberi, Amal

    2010-01-01

    This booklet aims to lay out and disseminate the foundations of a methodology for launching learning experiences with audiovisual cases in the virtual campus environment. To this end, a methodological protocol has been defined for using audiovisual cases within the virtual campus environment in different courses.

  19. Audiovisual integration increases the intentional step synchronization of side-by-side walkers.

    Science.gov (United States)

    Noy, Dominic; Mouta, Sandra; Lamas, Joao; Basso, Daniel; Silva, Carlos; Santos, Jorge A

    2017-12-01

    When people walk side-by-side, they often synchronize their steps. To achieve this, individuals might cross-modally match audiovisual signals from the movements of the partner and kinesthetic, cutaneous, visual and auditory signals from their own movements. Because signals from different sensory systems are processed with noise and asynchronously, the challenge of the CNS is to derive the best estimate based on this conflicting information. This is currently thought to be done by a mechanism operating as a Maximum Likelihood Estimator (MLE). The present work investigated whether audiovisual signals from the partner are integrated according to MLE in order to synchronize steps during walking. Three experiments were conducted in which the sensory cues from a walking partner were virtually simulated. In Experiment 1, seven participants were instructed to synchronize with human-sized Point Light Walkers and/or footstep sounds. Results revealed the highest synchronization performance with auditory and audiovisual cues. This was quantified by the time to achieve synchronization and by synchronization variability. However, this auditory dominance effect might have been due to artifacts of the setup. Therefore, in Experiment 2, human-sized virtual mannequins were implemented. Also, audiovisual stimuli were rendered in real time and thus were synchronous and co-localized. All four participants synchronized best with audiovisual cues. For three of the four participants, the results point toward optimal integration consistent with the MLE model. Experiment 3 yielded performance decrements for all three participants when the cues were incongruent. Overall, these findings suggest that individuals might optimally integrate audiovisual cues to synchronize steps during side-by-side walking. Copyright © 2017 Elsevier B.V. All rights reserved.
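The MLE mechanism referred to above combines cues weighted by their reliability (inverse variance), so the fused estimate is always at least as precise as the best single cue. A minimal numerical sketch, with hypothetical timing estimates rather than the study's data:

```python
import numpy as np

def mle_combine(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) fusion of independent
    Gaussian cues: weights are normalized inverse variances."""
    w = 1.0 / np.square(sigmas)
    w /= w.sum()
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / np.square(sigmas))
    return fused, fused_var

# Hypothetical numbers (not the study's data): the auditory step cue is
# more reliable (sigma = 20 ms) than the visual one (sigma = 40 ms).
fused, fused_var = mle_combine(np.array([0.0, 40.0]),
                               np.array([20.0, 40.0]))
# The fused estimate is pulled toward the reliable cue (8.0 ms), and the
# fused variance (320 ms^2) is below the best single cue's 400 ms^2.
```

The signature prediction, a fused variance below the best unimodal variance, is what the authors test against participants' synchronization variability.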

  20. Evidence for training-induced plasticity in multisensory brain structures: an MEG study.

    Directory of Open Access Journals (Sweden)

    Evangelos Paraskevopoulos

    Full Text Available Multisensory learning and resulting neural brain plasticity have recently become a topic of renewed interest in human cognitive neuroscience. Music notation reading is an ideal stimulus to study multisensory learning, as it allows studying the integration of visual, auditory and sensorimotor information processing. The present study aimed to answer whether multisensory learning alters uni-sensory structures, interconnections of uni-sensory structures, or specific multisensory areas. In a short-term piano training procedure, musically naive subjects were trained to play tone sequences from visually presented patterns in a music notation-like system [Auditory-Visual-Somatosensory group (AVS)], while another group received audio-visual-only training, which involved viewing the patterns and attentively listening to the recordings of the AVS training sessions [Auditory-Visual group (AV)]. Training-related changes in cortical networks were assessed by pre- and post-training magnetoencephalographic (MEG) recordings of an auditory, a visual and an integrated audio-visual mismatch negativity (MMN). The two groups (AVS and AV) were differently affected by the training. The results suggest that multisensory training alters the function of multisensory structures, and not the uni-sensory ones along with their interconnections, and thus provide an answer to an important question presented by cognitive models of multisensory training.

  1. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

    Science.gov (United States)

    Lidestam, Björn; Rönnberg, Jerker

    2016-01-01

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667

  2. Neurofunctional Underpinnings of Audiovisual Emotion Processing in Teens with Autism Spectrum Disorders

    Science.gov (United States)

    Doyle-Thomas, Krissy A.R.; Goldberg, Jeremy; Szatmari, Peter; Hall, Geoffrey B.C.

    2013-01-01

    Despite successful performance on some audiovisual emotion tasks, hypoactivity has been observed in frontal and temporal integration cortices in individuals with autism spectrum disorders (ASD). Little is understood about the neurofunctional network underlying this ability in individuals with ASD. Research suggests that there may be processing biases in individuals with ASD, based on their ability to obtain meaningful information from the face and/or the voice. This functional magnetic resonance imaging study examined brain activity in teens with ASD (n = 18) and typically developing controls (n = 16) during audiovisual and unimodal emotion processing. Teens with ASD had a significantly lower accuracy when matching an emotional face to an emotion label. However, no differences in accuracy were observed between groups when matching an emotional voice or face-voice pair to an emotion label. In both groups brain activity during audiovisual emotion matching differed significantly from activity during unimodal emotion matching. Between-group analyses of audiovisual processing revealed significantly greater activation in teens with ASD in a parietofrontal network believed to be implicated in attention, goal-directed behaviors, and semantic processing. In contrast, controls showed greater activity in frontal and temporal association cortices during this task. These results suggest that in the absence of engaging integrative emotional networks during audiovisual emotion matching, teens with ASD may have recruited the parietofrontal network as an alternate compensatory system. PMID:23750139

  3. Training of the Agency's inspectors

    International Nuclear Information System (INIS)

    Pontes, B.; Bates, G.; Dixon, G.

    1981-01-01

    The IAEA Safeguards inspectors are highly qualified professional staff. Their work, however, is a unique and specialized branch of knowledge and it is necessary to train those about to engage in it. Safeguards concepts, methods, practices and techniques are developing rapidly as more and more varied facilities come under international safeguards, needing more inspectors and other professional staff. Experienced inspectors also have to update their knowledge and skills. A Training Unit within the IAEA's Department of Safeguards meets these needs. The training programme for new as well as experienced inspectors is described. Extensive use is made in the training courses of television, videotaped material and other audiovisual aids. A substantial contribution is made to the training of the IAEA's inspectors by the support programmes of Member States

  4. [Business plan for an audiovisual production company: La Central Audiovisual y Publicidad]

    OpenAIRE

    Arroyave Velasquez, Alejandro

    2015-01-01

    This document presents the business plan for La Central Publicidad y Audiovisual, a company dedicated to the pre-production, production and post-production of audiovisual material. The company will be located in the city of Cali, and its target market comprises the city's different types of companies, including small, medium and large enterprises.

  5. 49 CFR 236.919 - Operations and Maintenance Manual.

    Science.gov (United States)

    2010-10-01

    ..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Standards for Processor-Based Signal and Train Control Systems § 236.919 Operations and Maintenance Manual. (a... identify all software versions, revisions, and revision dates. Plans must be legible and correct. (c...

  6. 49 CFR 236.1039 - Operations and Maintenance Manual.

    Science.gov (United States)

    2010-10-01

    ..., INSPECTION, MAINTENANCE, AND REPAIR OF SIGNAL AND TRAIN CONTROL SYSTEMS, DEVICES, AND APPLIANCES Positive Train Control Systems § 236.1039 Operations and Maintenance Manual. (a) The railroad shall catalog and... software versions, revisions, and revision dates. Plans must be legible and correct. (c) Hardware, software...

  7. Audiovisual Speech Perception and Eye Gaze Behavior of Adults with Asperger Syndrome

    Science.gov (United States)

    Saalasti, Satu; Katsyri, Jari; Tiippana, Kaisa; Laine-Hernandez, Mari; von Wendt, Lennart; Sams, Mikko

    2012-01-01

    Audiovisual speech perception was studied in adults with Asperger syndrome (AS), by utilizing the McGurk effect, in which conflicting visual articulation alters the perception of heard speech. The AS group perceived the audiovisual stimuli differently from age, sex and IQ matched controls. When a voice saying /p/ was presented with a face…

  8. BWR nuclear plant maintenance simulation

    International Nuclear Information System (INIS)

    Stuart, I.F.

    1985-01-01

    As early as 1977, the General Electric Company, USA, Nuclear Energy Operation was making plans to construct a maintenance-type simulator to support Training and Services. The Company's pioneering experience with control room simulators started in 1968 with the Dresden simulator and showed clearly the benefits of having such facilities for training, checkout of procedures and, in the case of maintenance, match-up of equipment or tools as needed. Since the dedication of the facility, it has proved to be an invaluable resource in the training of refuelling and servicing crews. The facility has also been extensively used as developmental and test facility for in-vessel servicing equipment and procedures. (author)

  9. Safety of nuclear operation and maintenance

    International Nuclear Information System (INIS)

    Mori, M.; Nitta, T.; Sakai, K.

    1994-01-01

    The Kansai Electric Power Co., Inc. (Kansai EPC) aims to pursue high-quality and highly reliable operation in nuclear power generation in order to ensure safety by reducing the risk of accidents and to win the confidence of society and the public. It is emphasised that, to realize this aim, manufacturers and contractors must cooperate with each other in performing high-quality maintenance through a plant-lifetime maintenance system. TQC (Total Quality Control) activity enhances each individual's motivation to maintain a quality-oriented mind and cultivates the safety culture. Under the lifetime employment practice, Kansai EPC and maintenance contractors can conduct systematic education and training, and the Maintenance Training Center helps to make it effective. 6 figs

  10. Audiovisual integration for speech during mid-childhood: Electrophysiological evidence

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer

    2014-01-01

    Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7–8-year-olds and 10–11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception. PMID:25463815

  11. Effect of Audiovisual Treatment Information on Relieving Anxiety in Patients Undergoing Impacted Mandibular Third Molar Removal.

    Science.gov (United States)

    Choi, Sung-Hwan; Won, Ji-Hoon; Cha, Jung-Yul; Hwang, Chung-Ju

    2015-11-01

    The authors hypothesized that an audiovisual slide presentation that provided treatment information regarding the removal of an impacted mandibular third molar could improve patient knowledge of postoperative complications and decrease anxiety in young adults before and after surgery. A group that received an audiovisual description was compared with a group that received the conventional written description of the procedure. This randomized clinical trial included young adult patients who required surgical removal of an impacted mandibular third molar and fulfilled the predetermined criteria. The predictor variable was the presentation of an audiovisual slideshow. The audiovisual informed group provided informed consent after viewing an audiovisual slideshow. The control group provided informed consent after reading a written description of the procedure. The outcome variables were the State-Trait Anxiety Inventory, the Dental Anxiety Scale, a self-reported anxiety questionnaire, completed immediately before and 1 week after surgery, and a postoperative questionnaire about the level of understanding of potential postoperative complications. The data were analyzed with χ(2) tests, independent t tests, Mann-Whitney U tests, and Spearman rank correlation coefficients. Fifty-one patients fulfilled the inclusion criteria. The audiovisual informed group comprised 20 men and 5 women; the written informed group comprised 21 men and 5 women. The audiovisual informed group remembered significantly more information than the control group about a potential allergic reaction to local anesthesia or medication and potential trismus. The audiovisual informed group also had lower self-reported anxiety scores than the control group 1 week after surgery. In conclusion, an audiovisual slide presentation could improve patient knowledge about postoperative complications and aid in alleviating anxiety after the surgical removal of an impacted mandibular third molar. Copyright © 2015

  12. Academic e-learning experience in the enhancement of open access audiovisual and media education

    OpenAIRE

    Pacholak, Anna; Sidor, Dorota

    2015-01-01

    The paper presents how the academic e-learning experience and didactic methods of the Centre for Open and Multimedia Education (COME UW), University of Warsaw, enhance the open access to audiovisual and media education at various levels of education. The project is implemented within the Audiovisual and Media Education Programme (PEAM). It is funded by the Polish Film Institute (PISF). The aim of the project is to create a proposal of a comprehensive and open programme for the audiovisual (me...

  13. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

    Full Text Available The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggested that abnormal audiovisual integration might be a potential early manifestation of PD.
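The race model analysis mentioned above is commonly Miller's race model inequality, under which redundant-signals responses faster than the bound F_A(t) + F_V(t) indicate genuine integration rather than a race between modalities. A minimal sketch with synthetic reaction times (illustrative values, not the patient data):

```python
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, grid, side="right") / rts.size

def race_model_violation(rt_a, rt_v, rt_av, grid):
    """Miller's inequality: positive values mark latencies where the
    bimodal CDF exceeds the race bound F_A(t) + F_V(t) (clipped at 1)."""
    bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
    return ecdf(rt_av, grid) - bound

# Synthetic reaction times (ms): bimodal responses are faster than
# either unimodal distribution predicts under a pure race.
rng = np.random.default_rng(1)
rt_a = rng.normal(300.0, 30.0, 200)
rt_v = rng.normal(320.0, 30.0, 200)
rt_av = rng.normal(250.0, 25.0, 200)
grid = np.arange(150.0, 450.0, 5.0)
diff = race_model_violation(rt_a, rt_v, rt_av, grid)
```

Positive values of `diff` mark latencies where bimodal responses beat the race bound; this is the pattern the abstract reports as absent in the PD group.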

  14. Maintenance of exercise training benefits is associated with adequate milk and dairy products intake in elderly hypertensive subjects following detraining.

    Science.gov (United States)

    Moraes, Wilson Max Almeida Monteiro de; Santos, Neucilane Silveira Dos; Aguiar, Larissa Pereira; Sousa, Luís Gustavo Oliveira de

    2017-01-01

    To investigate whether maintenance of exercise training benefits is associated with adequate milk and dairy products intake in hypertensive elderly subjects after detraining. Twenty-eight elderly hypertensive patients under optimal clinical treatment underwent a 16-week multicomponent exercise training program followed by 6 weeks of detraining, and were classified according to milk and dairy products intake as having low or adequate intake. After exercise training, there were significant improvements in pressure levels, lower-extremity strength and aerobic capacity. Maintenance of these exercise training benefits was associated with adequate milk and dairy products intake in hypertensive elderly subjects following 6 weeks of detraining.

  15. Electrophysiological evidence for differences between fusion and combination illusions in audiovisual speech perception.

    Science.gov (United States)

    Baart, Martijn; Lindborg, Alma; Andersen, Tobias S

    2017-11-01

    Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli. © 2017 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Development of a mechanical maintenance training simulator in OpenSimulator for F-16 aircraft engines

    OpenAIRE

    Pinheiro, André; Fernandes, Paulo; Maia, Ana; Cruz, Gonçalo; Pedrosa, Daniela; Fonseca, Benjamim; Paredes, Hugo; Martins, Paulo; Morgado, Leonel; Rafael, Jorge

    2014-01-01

    Mechanical maintenance of F-16 engines is carried out as a team effort involving 3–4 skilled engine technicians, but the details of its procedures and requisites change constantly, to improve safety, optimize resources, and respond to knowledge learned from field outcomes. This provides a challenge for development of training simulators, since simulated actions risk becoming obsolete rapidly and require costly reimplementation. This paper presents the development of a 3D mechanical maintenanc...

  17. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  18. Neural correlates of audiovisual speech processing in a second language.

    Science.gov (United States)

    Barrós-Loscertales, Alfonso; Ventura-Campos, Noelia; Visser, Maya; Alsius, Agnès; Pallier, Christophe; Avila Rivera, César; Soto-Faraco, Salvador

    2013-09-01

    Neuroimaging studies of audiovisual speech processing have exclusively addressed listeners' native language (L1). Yet, several behavioural studies now show that AV processing plays an important role in non-native (L2) speech perception. The current fMRI study measured brain activity during auditory, visual, audiovisual congruent and audiovisual incongruent utterances in L1 and L2. BOLD responses to congruent AV speech in the pSTS were stronger than in either unimodal condition in both L1 and L2. Yet no differences in AV processing were expressed according to the language background in this area. Instead, the regions in the bilateral occipital lobe had a stronger congruency effect on the BOLD response (congruent higher than incongruent) in L2 as compared to L1. According to these results, language background differences are predominantly expressed in these unimodal regions, whereas the pSTS is similarly involved in AV integration regardless of language dominance. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Threats and opportunities for new audiovisual cultural heritage archive services: the Dutch case

    NARCIS (Netherlands)

    Ongena, G.; Huizer, E.; van de Wijngaert, Lidwien

    2012-01-01

    Purpose: The purpose of this paper is to analyze the business-to-consumer market for digital audiovisual archiving services. In doing so, we identify drivers, threats, and opportunities for new services based on audiovisual archives in the cultural heritage domain. By analyzing the market we provide

  20. Investigating the impact of audio instruction and audio-visual biofeedback for lung cancer radiation therapy

    Science.gov (United States)

    George, Rohini

    Lung cancer accounts for 13% of all cancers in the United States and is the leading cause of cancer deaths among both men and women. The five-year survival for lung cancer patients is approximately 15% (ACS Facts & Figures). Respiratory motion decreases the accuracy of thoracic radiotherapy during imaging and delivery. To account for respiration, margins are generally added during radiation treatment planning, which may cause substantial dose delivery to normal tissues and increase normal tissue toxicity. To alleviate the above-mentioned effects of respiratory motion, several motion management techniques are available that can reduce the doses to normal tissues, thereby reducing treatment toxicity and allowing dose escalation to the tumor. This may increase the survival probability of patients who have lung cancer and are receiving radiation therapy. However, the accuracy of these motion management techniques is limited by respiratory irregularity. The rationale of this thesis was to study the improvement in the regularity of respiratory motion achieved through breathing coaching of lung cancer patients using audio instructions and audio-visual biofeedback. A total of 331 patient respiratory motion traces, each four minutes in length, were collected from 24 lung cancer patients enrolled in an IRB-approved breathing-training protocol. It was determined that audio-visual biofeedback significantly improved the regularity of respiratory motion compared to free breathing and audio instruction, thus improving the accuracy of respiratory-gated radiotherapy. It was also observed that duty cycles below 30% showed an insignificant reduction in residual motion, while above 50% there was a sharp increase in residual motion. The reproducibility of exhale-based gating was higher than that of inhale-based gating. When the respiratory cycles were modeled, the cosine and cosine⁴ models had the best correlation with individual respiratory cycles. 
The overall respiratory motion probability distribution
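    The per-cycle model comparison described in this record can be illustrated in a few lines. The sketch below is not the thesis's actual fitting procedure: the synthetic "measured" cycle, the specific model forms (a plain cosine and a Lujan-style cosine⁴ shape, both rescaled to [-1, 1]), and all parameter values are assumptions for illustration.

```python
import numpy as np

# Two candidate one-cycle models with period tau, both scaled to [-1, 1]:
#   "cosine":   z(t) = cos(2*pi*t / tau)
#   "cosine^4": z(t) = 2*cos^4(pi*t / tau) - 1   (Lujan-style exhale plateau)
tau = 4.0                                    # assumed cycle length in seconds
t = np.linspace(0.0, tau, 200)
model_cos = np.cos(2.0 * np.pi * t / tau)
model_cos4 = 2.0 * np.cos(np.pi * t / tau) ** 4 - 1.0

# Synthetic "measured" cycle: plateau-shaped trace plus measurement noise.
rng = np.random.default_rng(0)
measured = model_cos4 + rng.normal(0.0, 0.05, t.size)

# Pearson correlation of each candidate model with the measured cycle.
r_cos = np.corrcoef(model_cos, measured)[0, 1]
r_cos4 = np.corrcoef(model_cos4, measured)[0, 1]
print(f"cosine: r = {r_cos:.3f}, cosine^4: r = {r_cos4:.3f}")
```

With these assumed signals the cosine⁴ model correlates more strongly with the plateau-shaped cycle, mirroring the kind of model-versus-cycle correlation comparison the thesis reports.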

  1. Lousa Digital Interativa: avaliação da interação didática e proposta de aplicação de narrativa audiovisual / Interactive White Board – IWB: assessment in interaction didactic and audiovisual narrative proposal

    Directory of Open Access Journals (Sweden)

    Francisco García García

    2011-04-01

    The use of audiovisual material in the classroom does not guarantee effective learning, but for students it remains an interesting and attractive element. This work, which brings together two studies (the first showing the importance of didactic interaction with the interactive whiteboard, the second providing a list of audiovisual narrative elements that can be applied in the classroom), proposes command of the elements of audiovisual narrative as a theoretical resource for teachers who want to produce audiovisual content for digital platforms such as the Interactive Digital Whiteboard (Lousa Digital Interativa, LDI). The text is divided into three parts: the first presents the theoretical concepts of the two studies, the second discusses their results, and the third proposes a pedagogical practice of didactic interaction with audiovisual narrative elements for use with the LDI.

  2. The efficacy of stuttering measurement training: evaluating two training programs.

    Science.gov (United States)

    Bainbridge, Lauren A; Stavros, Candace; Ebrahimian, Mineh; Wang, Yuedong; Ingham, Roger J

    2015-04-01

    Two stuttering measurement training programs currently used for training clinicians were evaluated for their efficacy in improving the accuracy of total stuttering event counting. Four groups, each with 12 randomly allocated participants, completed a pretest-posttest design training study. They were evaluated on their counts of stuttering events in eight 3-min audiovisual speech samples from adults and children who stutter. Stuttering judgment training involved use of either the Stuttering Measurement System (SMS) program, the Stuttering Measurement Assessment and Training (SMAAT) program, or no training. To test the reliability of any training effect, SMS training was repeated with the fourth group. Both SMS-trained groups produced approximately 34% improvement, significantly better than no training or the SMAAT program. The SMAAT program produced a mixed result: half of the SMAAT judges produced a 36% improvement in accuracy, but the other half showed no improvement. The SMS program was shown to produce a "medium" effect size improvement in the accuracy of stuttering event counts, and this improvement was almost perfectly replicated in a second group. Additional studies are needed to demonstrate the durability of the reported improvements, but these positive effects justify the importance of stuttering measurement training.
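    The accuracy scores discussed in this record come down to comparing judges' stuttering-event counts against agreed criterion counts before and after training. The sketch below is a generic illustration of that arithmetic, not the scoring rule of the SMS or SMAAT programs; all counts and the relative-error definition are invented for the example.

```python
# Accuracy of stuttering-event counts, scored against criterion counts.
# All numbers are invented for illustration.

def count_error(judge_counts, criterion_counts):
    """Total absolute count error relative to the total criterion count."""
    total_err = sum(abs(j - c) for j, c in zip(judge_counts, criterion_counts))
    return total_err / sum(criterion_counts)

criterion = [12, 30, 7, 21, 15, 9, 26, 18]      # agreed counts, 8 samples
pre_training = [8, 22, 4, 14, 10, 5, 18, 12]    # one judge before training
post_training = [11, 27, 6, 19, 14, 8, 24, 16]  # the same judge after training

pre_err = count_error(pre_training, criterion)
post_err = count_error(post_training, criterion)
improvement = (pre_err - post_err) / pre_err * 100.0
print(f"error before: {pre_err:.2f}, after: {post_err:.2f}, "
      f"improvement: {improvement:.0f}%")
```

The percentage improvement reported in the study would then be an average of such per-judge changes across a trained group.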

  3. Sustainable models of audiovisual commons

    Directory of Open Access Journals (Sweden)

    Mayo Fuster Morell

    2013-03-01

    This paper addresses an emerging phenomenon characterized by continuous change and experimentation: the collaborative commons creation of audiovisual content online. The analysis focuses on models of sustainability for collaborative online creation, paying particular attention to the use of different forms of advertising. The article is an excerpt from a larger investigation whose units of analysis are Online Creation Communities centred on the Catalan territory. Across 22 selected cases, the methodology combines quantitative analysis, through a questionnaire delivered to all cases, and qualitative analysis, through face-to-face interviews conducted in 8 of the cases. The research, whose conclusions are summarized in this article, indicates that the sustainability of such projects depends largely on relationships of trust and interdependence between voluntary agents, on non-monetary contributions and rewards, and on freely usable resources and infrastructure. Together, these findings suggest that this is, and will remain, a very important area for the future of audiovisual content and its sustainability, with implications for the policies that govern it.

  4. Contingent post-closure plan, hazardous waste management units at selected maintenance facilities, US Army National Training Center, Fort Irwin, California

    International Nuclear Information System (INIS)

    1992-01-01

    The National Training Center (NTC) at Fort Irwin, California, is a US Army training installation that provides tactical experience for battalion/task forces and squadrons in a mid- to high-intensity combat scenario. Through joint exercises with US Air Force and other services, the NTC also provides a data source for improvements of training doctrines, organization, and equipment. To meet the training and operational needs of the NTC, several maintenance facilities provide general and direct support for mechanical devices, equipment, and vehicles. Maintenance products used at these facilities include fuels, petroleum-based oils, lubricating grease, various degreasing solvents, antifreeze (ethylene glycol), transmission fluid, brake fluid, and hydraulic oil. Used or spent petroleum-based products generated at the maintenance facilities are temporarily accumulated in underground storage tanks (USTs), collected by the NTC hazardous waste management contractor (HAZCO), and stored at the Petroleum, Oil, and Lubricant (POL) Storage Facility, Building 630, until shipped off site to be recovered, reused, and/or reclaimed. Spent degreasing solvents and other hazardous wastes are containerized and stored on-base for up to 90 days at the NTC's Hazardous Waste Storage Facility, Building 703. The US Environmental Protection Agency (EPA) performed an inspection and reviewed the hazardous waste management operations of the NTC. Inspections indicated that the NTC had violated one or more requirements of Subtitle C of the Resource Conservation and Recovery Act (RCRA) and as a result of these violations was issued a Notice of Noncompliance, Notice of Necessity for Conference, and Proposed Compliance Schedule (NON) dated October 13, 1989. The following post-closure plan is the compliance-based approach for the NTC to respond to the regulatory violations cited in the NON

  5. Longevity and Depreciation of Audiovisual Equipment.

    Science.gov (United States)

    Post, Richard

    1987-01-01

    Describes results of survey of media service directors at public universities in Ohio to determine the expected longevity of audiovisual equipment. Use of the Delphi technique for estimates is explained, results are compared with an earlier survey done in 1977, and use of spreadsheet software to calculate depreciation is discussed. (LRW)

  6. Audiovisual quality assessment in communications applications: Current status, trends and challenges

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2010-01-01

    Audiovisual quality assessment is one of the major challenges in multimedia communications. Traditionally, algorithm-based (objective) assessment methods have focused primarily on compression artifacts. However, compression is only one of the numerous factors influencing the perception… addressed in practical quality metrics is the co-impact of audio and video qualities. This paper provides an overview of the current trends and challenges in objective audiovisual quality assessment, with emphasis on communication applications…

  7. Alfasecuencialización: la enseñanza del cine en la era del audiovisual Sequential literacy: the teaching of cinema in the age of audio-visual speech

    Directory of Open Access Journals (Sweden)

    José Antonio Palao Errando

    2007-10-01

    In the so-called "information society", film studies have been diluted into a pragmatic, technological treatment of audiovisual discourse, just as the enjoyment of cinema itself has been caught in the net of the DVD and hypertext. Cinema reacts to this through complex narrative structures that distance it from standard audiovisual discourse. The role of film studies in university teaching should be to reintroduce, through the interpretation of the film text, the subject that informative knowledge excludes.

  8. A economia do audiovisual no contexto contemporâneo das Cidades Criativas

    Directory of Open Access Journals (Sweden)

    Paulo Celso da Silva

    2012-12-01

    This work addresses the audiovisual economy in cities with "creative" status. More than an adjective, it is within activities linked to communication (the audiovisual among them), culture, fashion, architecture, and local crafts that such cities have renewed their mode of accumulation, reorganizing public and private spaces. The cities of Barcelona, Berlin, New York, Milan, and São Paulo are representative for the aim of analyzing cities in relation to the development of the audiovisual sector, drawing on official data that support a more realistic understanding of each of them.

  9. Venezuela: Nueva Experiencia Audiovisual

    Directory of Open Access Journals (Sweden)

    Revista Chasqui

    2015-01-01

    In 1986, the Universidad Simón Bolívar (USB) created the Foundation for the Development of Audiovisual Art, ARTEVISION. Its general objective is the promotion and sale of services and products for television, radio, cinema, design, and photography of high artistic and technical quality, without neglecting the theoretical and academic aspects of these disciplines.

  10. Audiovisual Narrative Creation and Creative Retrieval: How Searching for a Story Shapes the Story

    NARCIS (Netherlands)

    Sauer, Sabrina

    2017-01-01

    Media professionals – such as news editors, image researchers, and documentary filmmakers - increasingly rely on online access to digital content within audiovisual archives to create narratives. Retrieving audiovisual sources therefore requires an in-depth knowledge of how to find sources

  11. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  12. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  13. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  14. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    Science.gov (United States)

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs, but the neural mechanisms of the deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. 78 FR 63243 - Certain Audiovisual Components and Products Containing the Same; Commission Determination To...

    Science.gov (United States)

    2013-10-23

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same; Commission Determination To Review a Final Initial Determination Finding a... section 337 as to certain audiovisual components and products containing the same with respect to claims 1...

  16. Rhythmic synchronization tapping to an audio-visual metronome in budgerigars.

    Science.gov (United States)

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.

  17. Search in audiovisual broadcast archives : doctoral abstract

    NARCIS (Netherlands)

    Huurnink, B.

    Documentary makers, journalists, news editors, and other media professionals routinely require previously recorded audiovisual material for new productions. For example, a news editor might wish to reuse footage shot by overseas services for the evening news, or a documentary maker might require

  18. Audiovisual distraction for pain relief in paediatric inpatients: A crossover study.

    Science.gov (United States)

    Oliveira, N C A C; Santos, J L F; Linhares, M B M

    2017-01-01

    Pain is a stressful experience that can have a negative impact on child development. The aim of this crossover study was to examine the efficacy of audiovisual distraction for acute pain relief in paediatric inpatients. The sample comprised 40 inpatients (6-11 years) who underwent painful puncture procedures. The participants were randomized into two groups, and all children received the intervention and served as their own controls. Stress and pain-catastrophizing assessments were initially performed using the Child Stress Scale and the Pain Catastrophizing Scale for Children, with the aim of controlling these variables. Pain was assessed using a Visual Analog Scale and the Faces Pain Scale-Revised after the painful procedures. Group 1 received audiovisual distraction before and during the puncture procedure, which was performed again without intervention on another day; the procedure was reversed in Group 2. Audiovisual distraction used animated short films. A 2 × 2 × 2 analysis of variance for a 2 × 2 crossover study was performed, with a 5% level of statistical significance. The two groups had similar baseline measures of stress and pain catastrophizing. A significant difference was found between periods with and without distraction in both groups: scores on both pain scales were lower during distraction than without intervention. The sequence of exposure to the distraction intervention in both groups, and whether the distraction accompanied the first or second painful procedure, also significantly influenced the efficacy of the intervention. Audiovisual distraction effectively reduced the intensity of pain perception in paediatric inpatients. The crossover design provides a better understanding of the effects of distraction for acute pain management. Audiovisual distraction was a powerful and effective non-pharmacological intervention for pain relief in paediatric inpatients. The effects were

  19. The natural statistics of audiovisual speech.

    Directory of Open Access Journals (Sweden)

    Chandramouli Chandrasekaran

    2009-07-01

    Humans, like other animals, are exposed to a continuous stream of signals, which are dynamic, multimodal, extended, and time varying in nature. This complex input space must be transduced and sampled by our sensory systems and transmitted to the brain where it can guide the selection of appropriate actions. To simplify this process, it has been suggested that the brain exploits statistical regularities in the stimulus space. Tests of this idea have largely been confined to unimodal signals and natural scenes. One important class of multisensory signals for which a quantitative input space characterization is unavailable is human speech. We do not understand what signals our brain has to actively piece together from an audiovisual speech stream to arrive at a percept versus what is already embedded in the signal structure of the stream itself. In essence, we do not have a clear understanding of the natural statistics of audiovisual speech. In the present study, we identified the following major statistical features of audiovisual speech. First, we observed robust correlations and close temporal correspondence between the area of the mouth opening and the acoustic envelope. Second, we found the strongest correlation between the area of the mouth opening and vocal tract resonances. Third, we observed that both the area of the mouth opening and the voice envelope are temporally modulated in the 2-7 Hz frequency range. Finally, we show that the timing of mouth movements relative to the onset of the voice is consistently between 100 and 300 ms. We interpret these data in the context of recent neural theories of speech which suggest that speech communication is a reciprocally coupled, multisensory event, whereby the outputs of the signaler are matched to the neural processes of the receiver.
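    The first statistic above (the correlation and temporal correspondence between mouth-opening area and the acoustic envelope) can be illustrated with synthetic signals. Everything in the sketch below is assumed rather than taken from the study: the toy 4 Hz mouth-area trace, the amplitude-modulated noise "audio", the rectify-and-smooth envelope estimate, and the 160 ms audio lag.

```python
import numpy as np

fs = 100                                    # sample rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)                # twenty seconds of "speech"
rng = np.random.default_rng(1)

# Toy mouth-opening area: syllabic modulation at 4 Hz (inside 2-7 Hz).
mouth_area = (1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)
              + rng.normal(0, 0.05, t.size))

# Toy acoustic signal whose amplitude follows the mouth area ~160 ms later.
lag_samples = 16                            # 160 ms at 100 Hz (assumed lag)
drive = np.roll(mouth_area, lag_samples)
audio = drive * rng.normal(0, 1.0, t.size)  # amplitude-modulated noise carrier

# Crude envelope: rectify, then smooth with a 50 ms moving average.
envelope = np.convolve(np.abs(audio), np.ones(5) / 5, mode="same")

# Correlate mouth area with the envelope over a range of candidate lags.
lags = list(range(0, 41, 2))                # 0-400 ms in 20 ms steps
corrs = [np.corrcoef(mouth_area[: t.size - lag], envelope[lag:])[0, 1]
         for lag in lags]
best_lag_ms = lags[int(np.argmax(corrs))] * 1000 // fs
print(f"best audio-visual lag: ~{best_lag_ms} ms, r = {max(corrs):.2f}")
```

With these assumed signals, the lag that maximizes the mouth-envelope correlation recovers the built-in audio delay, the same kind of cross-correlation evidence the study reports for real audiovisual speech.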

  20. Contracting, An Alarming Trend in Aviation Maintenance

    National Research Council Canada - National Science Library

    Brooke, J

    1998-01-01

    Aviation operational and maintenance units struggle to balance peacetime requirements for general military and technical training, organization and installation support, training and operational...

  1. Graduate Education and Simulation Training for CBRNE Disasters Using a Multimodal Approach to Learning. Part 2: Education and Training from the Perspectives of Educators and Students

    Science.gov (United States)

    2013-08-01

    ...quantify learning effectiveness and retention rates by comparing didactic lectures, reading, audiovisual presentations, demonstrations, discussion...

  2. Presentación: Narrativas de no ficción audiovisual, interactiva y transmedia

    Directory of Open Access Journals (Sweden)

    Arnau Gifreu Castells

    2015-03-01

    Number 8 of Obra Digital Revista de Comunicación explores audiovisual, interactive, and transmedia forms of non-fiction narrative expression. Throughout the history of communication, the field of non-fiction has always been regarded as lesser than its fictional counterpart. The same holds in research, where studies of audiovisual, interactive, and transmedia fiction narratives have always been one step ahead of studies of non-fiction narratives. This monograph proposes a theoretical and practical approach to non-fiction narrative forms such as documentary, reportage, the essay, educational formats, and institutional films, in order to provide a picture of their current position in the media ecosystem. Keywords: Non-fiction, Audiovisual Narrative, Interactive Narrative, Transmedia Narrative.

  3. Audio-visual training-aid for speechreading

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich; Gebert, H.

    2011-01-01

    People with decreasing hearing ability are more dependent on alternative personal communication channels. To "read and understand" visible articulatory movements of the conversation partner, as done in the process of speechreading, is one possible solution for understanding verbal statements… on the employment of computer-based communication aids for hearing-impaired, deaf and deaf-blind people [6]. This paper presents the complete system that is composed of a 3D facial animation with synchronized speech synthesis, a natural language dialogue unit and a student-teacher training module. Due to the very… modular structure of the software package and the centralized event manager, it is possible to add or replace specific modules when needed. The present version of our teacher-student module uses a hierarchically structured composition of important single words and short phrases, supplemented by easy…

  4. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    Science.gov (United States)

    Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I

    2017-06-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.

  5. The audiovisual mounting narrative as a basis for the documentary film interactive: news studies

    Directory of Open Access Journals (Sweden)

    Mgs. Denis Porto Renó

    2008-01-01

    Full Text Available This paper presents a literature review and pilot results from the doctoral research "audiovisual editing language for the interactive documentary film", which defends the thesis that interactive features exist in the audio and video editing of a film, even acting as agents of interactivity. The search for interactive audiovisual formats is present in international investigations, but mostly under a technological gaze. The purpose of this paper is to propose possible formats for interactive audiovisual production in film, video, television, computers and mobile phones in postmodern society. Key words: audiovisual, language, interactivity, interactive cinema, documentary, communication.

  6. Comparison of audio and audiovisual measures of adult stuttering: Implications for clinical trials.

    Science.gov (United States)

    O'Brian, Sue; Jones, Mark; Onslow, Mark; Packman, Ann; Menzies, Ross; Lowe, Robyn

    2015-04-15

    This study investigated whether measures of percentage syllables stuttered (%SS) and stuttering severity ratings with a 9-point scale differ when made from audiovisual compared with audio-only recordings. Four experienced speech-language pathologists measured %SS and assigned stuttering severity ratings to 10-minute audiovisual and audio-only recordings of 36 adults. There was a mean 18% increase in %SS scores when samples were presented in audiovisual compared with audio-only mode. This result was consistent across both higher and lower %SS scores and was found to be directly attributable to counts of stuttered syllables rather than the total number of syllables. There was no significant difference between stuttering severity ratings made from the two modes. In clinical trials research, when using %SS as the primary outcome measure, audiovisual samples would be preferred as long as clear, good quality, front-on images can be easily captured. Alternatively, stuttering severity ratings may be a more valid measure to use as they correlate well with %SS and values are not influenced by the presentation mode.
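    The %SS metric above is straightforward: the number of stuttered syllables divided by the total number of syllables, times 100. A minimal sketch with hypothetical counts (the study's 18% figure is an empirical mean and is not reproduced here):

```python
def percent_syllables_stuttered(stuttered: int, total: int) -> float:
    """%SS: stuttered syllables as a percentage of all syllables spoken."""
    return 100.0 * stuttered / total

# Hypothetical counts for one 10-minute sample scored in both modes.
# The total syllable count is identical across modes; only the stutter
# count rises, mirroring the study's finding that the increase in %SS was
# driven by counts of stuttered syllables rather than total syllables.
audio_only = percent_syllables_stuttered(stuttered=30, total=1000)   # 3.0 %SS
audiovisual = percent_syllables_stuttered(stuttered=36, total=1000)  # 3.6 %SS

relative_increase = 100.0 * (audiovisual - audio_only) / audio_only  # ~20% (hypothetical)
```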

  7. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Full Text Available Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system, and that early-level visual motion processing also plays a role.

  8. Propuestas para la investigación en comunicación audiovisual: publicidad social y creación colectiva en Internet / Research proposals for audiovisual communication: social advertising and collective creation on the Internet

    Directory of Open Access Journals (Sweden)

    Teresa Fraile Prieto

    2011-09-01

    Full Text Available The digital information society poses new challenges to researchers. As audiovisual communication has been consolidated as a discipline, cultural studies offer an advantageous analytical perspective for approaching the new creative and consumption practices of audiovisual media. This article defends the study of the audiovisual cultural products that this digital society produces, since they are a testimony of the social changes taking place within it. Specifically, it proposes approaching social advertising and objects of collective creation on the Internet as a means of understanding the circumstances of our society.

  9. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  10. Audio-visual materials usage preference among agricultural ...

    African Journals Online (AJOL)

    It was found that respondents preferred radio, television, poster, advert, photographs, specimen, bulletin, magazine, cinema, videotape, chalkboard, and bulletin board as audio-visual materials for extension work. These are the materials that can easily be manipulated and utilized for extension work. Nigerian Journal of ...

  11. Voice activity detection using audio-visual information

    DEFF Research Database (Denmark)

    Petsatodis, Theodore; Pnevmatikakis, Aristodemos; Boukis, Christos

    2009-01-01

    An audio-visual voice activity detector that uses sensors positioned distantly from the speaker is presented. Its constituting unimodal detectors are based on the modeling of the temporal variation of audio and visual features using Hidden Markov Models; their outcomes are fused using a post...

  12. Brain responses to audiovisual speech mismatch in infants are associated with individual differences in looking behaviour.

    Science.gov (United States)

    Kushnerenko, Elena; Tomalski, Przemyslaw; Ballieux, Haiko; Ribeiro, Helena; Potton, Anita; Axelsson, Emma L; Murphy, Elizabeth; Moore, Derek G

    2013-11-01

    Research on audiovisual speech integration has reported high levels of individual variability, especially among young infants. In the present study we tested the hypothesis that this variability results from individual differences in the maturation of audiovisual speech processing during infancy. A developmental shift in selective attention to audiovisual speech has been demonstrated between 6 and 9 months with an increase in the time spent looking to articulating mouths as compared to eyes (Lewkowicz & Hansen-Tift. (2012) Proc. Natl Acad. Sci. USA, 109, 1431-1436; Tomalski et al. (2012) Eur. J. Dev. Psychol., 1-14). In the present study we tested whether these changes in behavioural maturational level are associated with differences in brain responses to audiovisual speech across this age range. We measured high-density event-related potentials (ERPs) in response to videos of audiovisually matching and mismatched syllables /ba/ and /ga/, and subsequently examined visual scanning of the same stimuli with eye-tracking. There were no clear age-specific changes in ERPs, but the amplitude of audiovisual mismatch response (AVMMR) to the combination of visual /ba/ and auditory /ga/ was strongly negatively associated with looking time to the mouth in the same condition. These results have significant implications for our understanding of individual differences in neural signatures of audiovisual speech processing in infants, suggesting that they are not strictly related to chronological age but instead associated with the maturation of looking behaviour, and develop at individual rates in the second half of the first year of life. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Audiovisual Webjournalism: An analysis of news on UOL News and on TV UERJ Online

    Directory of Open Access Journals (Sweden)

    Leila Nogueira

    2008-06-01

    Full Text Available This work shows the development of audiovisual webjournalism on the Brazilian Internet. This paper, based on the analysis of UOL News on UOL TV – a pioneering format in commercial web television – and of UERJ Online TV – the first online university television in Brazil – investigates the changes in the gathering, production and dissemination processes of audiovisual news when it starts to be transmitted through the web. Reflections of authors such as Herreros (2003), Manovich (2001) and Gosciola (2003) are used to discuss the construction of audiovisual narrative on the web. To comprehend the current changes in today's webjournalism, we draw on the concepts developed by Fidler (1997); Bolter and Grusin (1998); Machado (2000); Mattos (2002); and Palacios (2003). We may conclude that the organization of narrative elements in cyberspace makes for the efficiency of journalistic messages, while establishing the basis of a particular language for audiovisual news on the Internet.

  14. Audio/visual analysis for high-speed TV advertisement detection from MPEG bitstream

    OpenAIRE

    Sadlier, David A.

    2002-01-01

    Advertisement breaks during or between television programmes are typically flagged by series of black-and-silent video frames, which recurrently occur in order to audio-visually separate individual advertisement spots from one another. It is the regular prevalence of these flags that enables automatic differentiation between what is programme content and what is advertisement break. Detection of these audio-visual depressions within broadcast television content provides a basis on which advertise…

  15. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning.

    Science.gov (United States)

    Tanabe, Hiroki C; Honda, Manabu; Sadato, Norihiro

    2005-07-06

    To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

  16. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice, which is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in a background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt classifier performance, and rough-set-based feature selection is a good method for dimension reduction. Thus 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to the synchronization when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multi-module fused recognition will become the trend in emotion recognition.

  17. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have associated audio, and since in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the gain in compression efficiency is analyzed.
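    The abstract describes the algorithm only at a high level. As an illustration, the idea of correlating audio dynamics with per-region visual dynamics can be sketched as follows (a toy construction, not the authors' implementation; the function name and signal shapes are assumptions):

```python
import numpy as np

def audiovisual_focus(audio_energy, region_motion):
    """Pick the spatial region whose motion dynamics correlate best with the
    audio energy envelope (a simplified reading of audiovisual FoA).

    audio_energy:  (T,) per-frame audio energy
    region_motion: (R, T) per-region, per-frame motion magnitude
    Returns the index of the best-correlated region.
    """
    corrs = [np.corrcoef(audio_energy, m)[0, 1] for m in region_motion]
    return int(np.argmax(corrs))

# Toy example: region 1's motion tracks the audio envelope, region 0 is unrelated.
t = np.linspace(0, 1, 100)
audio = np.sin(2 * np.pi * 3 * t) ** 2
regions = np.stack([
    np.random.default_rng(0).random(100),                       # unrelated motion
    audio + 0.05 * np.random.default_rng(1).standard_normal(100),  # audio-driven motion
])
print(audiovisual_focus(audio, regions))  # → 1
```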

  18. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological-motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30–50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  19. A linguagem audiovisual da lousa digital interativa no contexto educacional/Audiovisual language of the digital interactive whiteboard in the educational environment

    Directory of Open Access Journals (Sweden)

    Rosária Helena Ruiz Nakashima

    2006-01-01

    Full Text Available In this article we present information about the digital interactive whiteboard as a tool for bringing audiovisual language into the school context. To work, the interactive whiteboard must be connected to a computer, and the computer to a multimedia projector; through Digital Vision Touch (DViT) technology, the surface of the board becomes touch-sensitive. Using a finger, teachers and pupils can execute functions that increase interactivity with the activities proposed on the board. We present two pedagogical activities, for Science and Portuguese-language classes, that can be applied in early childhood education with five- and six-year-old pupils. This technology reflects the evolution of a type of language that is no longer based only on orality and writing but is also audiovisual and dynamic, allowing the subject to be a producer of information as well as a receiver. Schools should therefore take advantage of these technological resources, which facilitate work with audiovisual language in the classroom and allow the preparation of more meaningful and innovative lessons.

  20. Early and late beta-band power reflect audiovisual perception in the McGurk illusion.

    Science.gov (United States)

    Roa Romero, Yadira; Senkowski, Daniel; Keil, Julian

    2015-04-01

    The McGurk illusion is a prominent example of audiovisual speech perception and the influence that visual stimuli can have on auditory perception. In this illusion, a visual speech stimulus influences the perception of an incongruent auditory stimulus, resulting in a fused novel percept. In this high-density electroencephalography (EEG) study, we were interested in the neural signatures of the subjective percept of the McGurk illusion as a phenomenon of speech-specific multisensory integration. Therefore, we examined the role of cortical oscillations and event-related responses in the perception of congruent and incongruent audiovisual speech. We compared the cortical activity elicited by objectively congruent syllables with incongruent audiovisual stimuli. Importantly, the latter elicited a subjectively congruent percept: the McGurk illusion. We found that early event-related responses (N1) to audiovisual stimuli were reduced during the perception of the McGurk illusion compared with congruent stimuli. Most interestingly, our study showed a stronger poststimulus suppression of beta-band power (13-30 Hz) at short (0-500 ms) and long (500-800 ms) latencies during the perception of the McGurk illusion compared with congruent stimuli. Our study demonstrates that auditory perception is influenced by visual context and that the subsequent formation of a McGurk illusion requires stronger audiovisual integration even at early processing stages. Our results provide evidence that beta-band suppression at early stages reflects stronger stimulus processing in the McGurk illusion. Moreover, stronger late beta-band suppression in McGurk illusion indicates the resolution of incongruent physical audiovisual input and the formation of a coherent, illusory multisensory percept. Copyright © 2015 the American Physiological Society.

  1. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern-search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category, and also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, facilitating the neural representation of semantic categories or concepts. Furthermore, we analyzed brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG); the strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement fMRI signal amplitude in evaluating multimodal integration.
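    The abstract defines the reproducibility index as a within-category similarity of brain patterns. One plausible reading (an assumption, not the authors' exact formula) is the mean pairwise correlation between trial-wise patterns of the same category; more similar patterns yield a higher index:

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation between patterns (one vector of
    voxel activations per trial) of the same semantic category."""
    n = len(patterns)
    corrs = [np.corrcoef(patterns[i], patterns[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(corrs))

rng = np.random.default_rng(42)
template = rng.standard_normal(200)  # shared category pattern (200 voxels)
# Simulated trials: congruent stimuli elicit the template with little noise,
# unimodal stimuli with much more noise (illustrative, not real fMRI data).
congruent = [template + 0.2 * rng.standard_normal(200) for _ in range(5)]
unimodal = [template + 1.0 * rng.standard_normal(200) for _ in range(5)]
```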

  2. Planning and Producing Audiovisual Materials. Third Edition.

    Science.gov (United States)

    Kemp, Jerrold E.

    A revised edition of this handbook provides illustrated, step-by-step explanations of how to plan and produce audiovisual materials. Included are sections on the fundamental skills--photography, graphics and recording sound--followed by individual sections on photographic print series, slide series, filmstrips, tape recordings, overhead…

  3. Content-based analysis improves audiovisual archive retrieval

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2012-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. In this paper, we take into account the information needs

  4. Quantifying temporal ventriloquism in audiovisual synchrony perception

    NARCIS (Netherlands)

    Kuling, I.A.; Kohlrausch, A.G.; Juola, J.F.

    2013-01-01

    The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from

  5. 36 CFR 1256.100 - What is the copying policy for USIA audiovisual records that either have copyright protection or...

    Science.gov (United States)

    2010-07-01

    ... for USIA audiovisual records that either have copyright protection or contain copyrighted material... Distribution of United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.100 What is the copying policy for USIA audiovisual records that either have copyright...

  6. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Instrumentation maintenance

    International Nuclear Information System (INIS)

    Mack, D.A.

    1976-09-01

    It is essential to any research activity that accurate and efficient measurements be made for the experimental parameters under consideration for each individual experiment or test. Satisfactory measurements in turn depend upon having the necessary instruments and the capability of ensuring that they are performing within their intended specifications. This latter requirement can only be achieved by providing an adequate maintenance facility, staffed with personnel competent to understand the problems associated with instrument adjustment and repair. The Instrument Repair Shop at the Lawrence Berkeley Laboratory is designed to achieve this end. The organization, staffing and operation of this system is discussed. Maintenance policy should be based on studies of (1) preventive vs. catastrophic maintenance, (2) records indicating when equipment should be replaced rather than repaired and (3) priorities established to indicate the order in which equipment should be repaired. Upon establishing a workable maintenance policy, the staff should be instructed so that they may provide appropriate scheduled preventive maintenance, calibration and corrective procedures, and emergency repairs. The education, training and experience of the maintenance staff is discussed along with the organization for an efficient operation. The layout of the various repair shops is described in the light of laboratory space and financial constraints

  8. Selective Attention and Audiovisual Integration: Is Attending to Both Modalities a Prerequisite for Early Integration?

    NARCIS (Netherlands)

    Talsma, D.; Doty, Tracy J.; Woldorff, Marty G.

    2007-01-01

    Interactions between multisensory integration and attention were studied using a combined audiovisual streaming design and a rapid serial visual presentation paradigm. Event-related potentials (ERPs) following audiovisual objects (AV) were compared with the sum of the ERPs following auditory (A) and

  9. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    … occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, in which pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than for visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension …

  10. A Similarity-Based Approach for Audiovisual Document Classification Using Temporal Relation Analysis

    Directory of Open Access Journals (Sweden)

    Ferrane Isabelle

    2011-01-01

    Full Text Available We propose a novel approach to video classification based on the analysis of the temporal relationships between basic events in audiovisual documents. Starting from basic segmentation results, we define a new representation method called the Temporal Relation Matrix (TRM). Each document is then described by a set of TRMs, the analysis of which makes higher-level events stand out. This representation was first designed to analyze any audiovisual document in order to find events that characterize its content and structure. The aim of this work is to use the representation to compute a similarity measure between two documents. Approaches to audiovisual document classification are presented and discussed. Experiments on a set of 242 video documents show the efficiency of our proposals.
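    The abstract does not give the TRM construction in detail. The following is a hedged sketch of the general idea: count coarse temporal relations between labeled segments and compare documents by cosine similarity (the segment labels, relation set, and similarity measure are illustrative assumptions, not the paper's definitions):

```python
from itertools import combinations
import math

def relation(a, b):
    """Coarse temporal relation between two (start, end) intervals."""
    if a[1] <= b[0]:
        return "precedes"
    if b[1] <= a[0]:
        return "follows"
    if a[0] <= b[0] and b[1] <= a[1]:
        return "contains"
    return "overlaps"

def trm(events):
    """Temporal-relation counts over (label_a, label_b, relation) for every
    pair of basic events; events = list of (label, start, end)."""
    counts = {}
    for (la, *sa), (lb, *sb) in combinations(events, 2):
        key = (la, lb, relation(tuple(sa), tuple(sb)))
        counts[key] = counts.get(key, 0) + 1
    return counts

def similarity(t1, t2):
    """Cosine similarity between two TRM count vectors."""
    keys = set(t1) | set(t2)
    dot = sum(t1.get(k, 0) * t2.get(k, 0) for k in keys)
    n1 = math.sqrt(sum(v * v for v in t1.values()))
    n2 = math.sqrt(sum(v * v for v in t2.values()))
    return dot / (n1 * n2)

# Toy documents: a and b share the same event structure, c does not.
doc_a = [("speech", 0, 5), ("music", 5, 9), ("speech", 9, 14)]
doc_b = [("speech", 0, 4), ("music", 4, 7), ("speech", 7, 12)]
doc_c = [("music", 0, 10), ("speech", 2, 3)]
print(similarity(trm(doc_a), trm(doc_b)) > similarity(trm(doc_a), trm(doc_c)))  # → True
```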

  11. Conditioning Influences Audio-Visual Integration by Increasing Sound Saliency

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    2011-10-01

    Full Text Available We investigated the effect of prior conditioning of an auditory stimulus on audiovisual integration in a series of four psychophysical experiments. The experiments factorially manipulated the conditioning procedure (picture vs. monetary conditioning) and the multisensory paradigm (2AFC visual detection vs. redundant target paradigm). In the conditioning sessions, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with neutral, positive, or negative unconditioned stimuli (US; monetary: +50 euro cents, −50 cents, 0 cents; pictures: highly pleasant, unpleasant, and neutral IAPS). In a 2AFC visual selective attention paradigm, detection of near-threshold Gabors was improved by concurrent sounds that had previously been paired with a positive (monetary) or negative (picture) outcome relative to neutral sounds. In the redundant target paradigm, sounds previously paired with positive (monetary) or negative (picture) outcomes increased response speed to both auditory and audiovisual targets similarly. Importantly, prior conditioning did not increase the multisensory response facilitation (i.e., (A + V)/2 − AV) or the race model violation. Collectively, our results suggest that prior conditioning primarily increases the saliency of the auditory stimulus per se rather than influencing audiovisual integration directly. In turn, conditioned sounds are rendered more potent for increasing response accuracy or speed in the detection of visual targets.
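    The race model violation mentioned above is a standard construct (Miller's inequality): redundant-target responses are faster than any race of unimodal processes would allow wherever P(RT ≤ t | AV) exceeds P(RT ≤ t | A) + P(RT ≤ t | V). A minimal sketch with hypothetical reaction times, not data from the study:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, ts):
    """Evaluate Miller's race-model inequality at each time t: a violation
    occurs wherever P(RT <= t | AV) > P(RT <= t | A) + P(RT <= t | V)."""
    cdf = lambda rts, t: np.mean(np.asarray(rts) <= t)
    return [cdf(rt_av, t) - (cdf(rt_a, t) + cdf(rt_v, t)) for t in ts]

# Toy reaction times (ms); audiovisual responses are faster than either
# unimodal condition, producing a violation at early quantiles.
rt_a = [320, 340, 360, 380, 400]
rt_v = [330, 350, 370, 390, 410]
rt_av = [280, 290, 300, 310, 320]
diffs = race_model_violation(rt_a, rt_v, rt_av, ts=range(250, 450, 10))
print(any(d > 0 for d in diffs))  # → True
```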

  12. Effects of audio-visual aids on foreign language test anxiety, reading and listening comprehension, and retention in EFL learners.

    Science.gov (United States)

    Lee, Shu-Ping; Lee, Shin-Da; Liao, Yuan-Lin; Wang, An-Chi

    2015-04-01

    This study examined the effects of audio-visual aids on anxiety, comprehension test scores, and retention in reading and listening to short stories in English as a Foreign Language (EFL) classrooms. Reading and listening tests, general and test anxiety, and retention were measured in English-major college students in an experimental group with audio-visual aids (n=83) and a control group without audio-visual aids (n=94) with similar general English proficiency. Lower reading test anxiety, unchanged reading comprehension scores, and better reading short-term and long-term retention after four weeks were evident in the audiovisual group relative to the control group. In addition, lower listening test anxiety, higher listening comprehension scores, and unchanged short-term and long-term retention were found in the audiovisual group relative to the control group after the intervention. Audio-visual aids may help to reduce EFL learners' listening test anxiety and enhance their listening comprehension scores without facilitating retention of such materials. Although audio-visual aids did not increase reading comprehension scores, they helped reduce EFL learners' reading test anxiety and facilitated retention of reading materials.

  13. Study on Maintenance Personnel Development Plan For The Exported APR1400 Commissioning

    International Nuclear Information System (INIS)

    Cho, Sungbae; Kim, Jongdae; Jun, Hokwang; Hwang, Inok; Kang, Jaeyuel

    2012-01-01

    This paper indicates ways to develop maintenance personnel for the commissioning of the exported APR1400. The exported APR1400 has not yet undergone any maintenance, and the requirements for maintenance personnel have not yet been clarified. Based on its sound maintenance experience, KEPCO Plant Service and Engineering Company (KEPCO KPS) has studied maintenance training and career requirements to establish a development plan for the maintenance personnel of the exported nuclear power plant. By defining manpower and training requirements and a mobilization plan, we expect to secure the reliability of the exported APR1400

  14. 46 CFR 109.213 - Emergency training and drills.

    Science.gov (United States)

    2010-10-01

    ... to each person on board the unit. If audiovisual training aids are used, they must be incorporated... month. (3) Drills must be held before the unit enters service for the first time after modification of a... communication system, and ensuring that all on board are made aware of the order to abandon ship. (ii) Each...

  15. Market potential for interactive audio-visual media

    NARCIS (Netherlands)

    Leurdijk, A.; Limonard, S.

    2005-01-01

    NM2 (New Media for a New Millennium) develops tools for interactive, personalised and non-linear audio-visual content that will be tested in seven pilot productions. This paper looks at the market potential for these productions from a technological, a business and a users' perspective. It shows

  16. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  17. Development of Simulator Maintenance Engineer Qualification Program Draft

    International Nuclear Information System (INIS)

    Chung, Kyung Hun

    2010-01-01

    As of 2009, KHNP has seven full-scope simulators that are used for training Nuclear Power Plant (NPP) operators. Well-trained Simulator Maintenance Engineers (SMEs) are required to support these simulators. The SMEs maintain the simulators and address any issues identified, or any changes required, to keep each simulator up to date with its respective plant site. These issues are identified as Simulator Discrepancy Reports (DRs) or Work Orders (WOs) by the simulator operation personnel in KHNP. Simulator maintenance is very complex: a simulator spans many process areas and requires experts in software modeling for processes such as neutronics, thermal-hydraulics, logic, control, electrical systems and computer systems, as well as hardware subjects such as I&C, I/O, computers, etc. All these areas need experts, so the subject expertise needs to be divided among SMEs; in other words, the SMEs need to be trained in different specialties as well as at different levels. KHNP has seen the need to outsource the maintenance work for these complex simulators. Having one company concentrate on this work will bring many benefits: it provides proper and well-trained experts; maintains consistent support personnel; maintains the maintenance history for the simulator; coordinates and maintains the knowledge in house; and keeps the simulator maintenance consistent. To accomplish these goals, KEPCO RI has recognized the need for a program to adequately train and qualify the SMEs. KEPCO RI and GSE, which has provided 6 of the 7 NPP simulators in Korea, have jointly developed this Simulator Maintenance Engineer Qualification Program (SMEQP). After issue of this plan, KEPCO RI will maintain and modify it periodically as needed to meet the goals and purpose of the plan

  18. Audiovisual preconditioning enhances the efficacy of an anatomical dissection course: A randomised study.

    Science.gov (United States)

    Collins, Anne M; Quinlan, Christine S; Dolan, Roisin T; O'Neill, Shane P; Tierney, Paul; Cronin, Kevin J; Ridgway, Paul F

    2015-07-01

    The benefits of incorporating audiovisual materials into learning are well recognised. The outcome of integrating such a modality into anatomical education has not been reported previously. The aim of this randomised study was to determine whether audiovisual preconditioning is a useful adjunct to learning at an upper limb dissection course. Prior to instruction, participants completed a standardised pre-course multiple-choice questionnaire (MCQ). The intervention group was subsequently shown a video with a pre-recorded commentary. Following initial dissection, both groups completed a second MCQ. The final MCQ was completed at the conclusion of the course. Statistical analysis confirmed a significant improvement in the performance of both groups over the duration of the three MCQs. The intervention group significantly outperformed their control group counterparts immediately following audiovisual preconditioning and in the post-course MCQ. Audiovisual preconditioning is a practical and effective tool that should be incorporated into future course curricula to optimise learning. Level of evidence: This study appraises an intervention in medical education. Kirkpatrick Level 2b (modification of knowledge). Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling Lee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech, or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians had practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practice fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practice was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and, to a marginally significant degree, to natural speech.

  20. On-line repository of audiovisual material on feminist research methodology

    Directory of Open Access Journals (Sweden)

    Lena Prado

    2014-12-01

    Full Text Available This paper includes a collection of audiovisual material available in the repository of the Interdisciplinary Seminar of Feminist Research Methodology SIMReF (http://www.simref.net).

  1. PFP MICON maintenance manual. Revision 1

    International Nuclear Information System (INIS)

    Silvan, G.R.

    1995-01-01

    This manual covers the use of maintenance displays, maintenance procedures, system alarms, and common system failures. This manual is intended to supplement the MICON maintenance training, not replace it. It also assumes that the user is familiar with the normal operation of the MICON A/S system. The MICON system is a distributed control computer and, among other things, controls the HVAC system for the Plutonium Finishing Plant

  2. Net neutrality and audiovisual services

    OpenAIRE

    van Eijk, N.; Nikoltchev, S.

    2011-01-01

    Net neutrality is high on the European agenda. New regulations for the communication sector provide a legal framework for net neutrality and need to be implemented on both a European and a national level. The key element is not just about blocking or slowing down traffic across communication networks: the control over the distribution of audiovisual services constitutes a vital part of the problem. In this contribution, the phenomenon of net neutrality is described first. Next, the European a...

  3. Audiovisual integration of speech in a patient with Broca's Aphasia

    Science.gov (United States)

    Andersen, Tobias S.; Starrfelt, Randi

    2015-01-01

    Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception, other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion, suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia. PMID:25972819

  4. Electrophysiological evidence for speech-specific audiovisual integration

    NARCIS (Netherlands)

    Baart, M.; Stekelenburg, J.J.; Vroomen, J.

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were

  5. School Building Design and Audio-Visual Resources.

    Science.gov (United States)

    National Committee for Audio-Visual Aids in Education, London (England).

    The design of new schools should facilitate the use of audiovisual resources by ensuring that the materials used in the construction of the buildings provide adequate sound insulation and acoustical and viewing conditions in all learning spaces. The facilities to be considered are: electrical services; electronic services; light control and…

  6. Operation and maintenance of the technical installations in buildings

    DEFF Research Database (Denmark)

    Nielsen, O.(red.)

    The report contains twelve papers from a seminar on operation and maintenance, held at the Danish Building Research Institute in October 1976. The papers deal, among other things, with dimensioning and balancing of pipe systems, design of ventilating systems for adequate operation and maintenance, cost and quality in maintenance, maintenance service companies, as well as organization and training for building services maintenance.

  7. Iniciativas e ações feministas no audiovisual brasileiro contemporâneo

    Directory of Open Access Journals (Sweden)

    Marina Cavalcanti Tedesco

    2017-10-01

    Full Text Available It can be said that over the last two years the word feminism has acquired new weight, conquering significant space on social networks, in the media, and in the streets. The audiovisual field is one of the areas that has accompanied this recent rise of feminism, which has materialized in a series of initiatives focused on claiming rights and discussing sexism in the labor market. In this article we intend, with no pretension of exhausting the topic, to present and reflect on eight initiatives that we consider emblematic of this contemporary intersection between feminism and cinema: Mulher no Cinema, Mulheres do Audiovisual Brasil, Mulheres Negras no Audiovisual Brasileiro, Cabíria Prêmio de Roteiro, Eparrêi Filmes, Academia das Musas, Cineclube Delas, and FINCAR – Festival Internacional de Cinema de Realizadoras.

  8. Maintenance of nuclear power plants

    International Nuclear Information System (INIS)

    1982-01-01

    This Guide covers the organizational and procedural aspects of maintenance but does not give detailed technical advice on the maintenance of particular plant items. It gives guidance on preventive and remedial measures necessary to ensure that all structures, systems and components important to safety are capable of performing as intended. The Guide covers the organizational and administrative requirements for establishing and implementing preventive maintenance schedules, repairing defective plant items, providing maintenance facilities and equipment, procuring stores and spare parts, selecting and training maintenance personnel, reviewing and controlling plant modifications arising from maintenance, and for generating, collecting and retaining maintenance records. Maintenance shall be subject to quality assurance in all aspects important to safety. Because quality assurance has been dealt with in detail in other Safety Guides, it is only included here in specific instances where emphasis is required. Maintenance is considered to include functional and performance testing of plant, surveillance and in-service inspection, where these are necessary either to support other maintenance activities or to ensure continuing capability of structures, systems and components important to safety to perform their intended functions

  9. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    Science.gov (United States)

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue

  10. Text-to-audiovisual speech synthesizer for children with learning disabilities.

    Science.gov (United States)

    Mendi, Engin; Bayrak, Coskun

    2013-01-01

    Learning disabilities affect the ability of children to learn, despite their having normal intelligence. Assistive tools can highly increase functional capabilities of children with learning disorders such as writing, reading, or listening. In this article, we describe a text-to-audiovisual synthesizer that can serve as an assistive tool for such children. The system automatically converts an input text to audiovisual speech, providing synchronization of the head, eye, and lip movements of the three-dimensional face model with appropriate facial expressions and word flow of the text. The proposed system can enhance speech perception and help children having learning deficits to improve their chances of success.

  11. Computationally efficient clustering of audio-visual meeting data

    NARCIS (Netherlands)

    Hung, H.; Friedland, G.; Yeo, C.; Shao, L.; Shan, C.; Luo, J.; Etoh, M.

    2010-01-01

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors,

  12. Sincronía entre formas sonoras y formas visuales en la narrativa audiovisual

    Directory of Open Access Journals (Sweden)

    Lic. José Alfredo Sánchez Ríos

    1999-01-01

    Full Text Available Where must researchers position themselves to produce work that brings deeper knowledge for understanding a phenomenon as close and as complex as audiovisual communication, which uses sound and image simultaneously? What is the role of the researcher in audiovisual communication in contributing new approaches to this object of study? From this perspective, we think the new task of the researcher in audiovisual communication will be to build a theory that is less interpretive-subjective and to direct observations toward segmented knowledge that can be demonstrated, replicated, and self-questioned; that is, to study, elaborate, and construct a theory with a new and greater methodological rigor.

  13. Nuclear instrument maintenance - problems, solutions, and obstacles

    International Nuclear Information System (INIS)

    Vuister, P.H.

    1983-01-01

    In 200 laboratories in South-East Asia, Latin America, and Africa, a survey was made of the state of instrumentation for nuclear medicine. The principal cause of failures and defects was inadequate quality control and preventive maintenance. On the basis of the survey, coordinated research programs were compiled for the maintenance of nuclear instruments. The four principal points of the programs are: to safeguard the quality and stability of the electric power supplies for the instruments, to maintain constant temperature and humidity in the environment in which the equipment is operated, effective maintenance, and training of personnel. In 1981 and 1982, 14 local training courses were run, with emphasis on practical work and tests in mechanics and electronics

  14. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  15. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    Science.gov (United States)

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.

  16. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
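Steady-state responses like those described above (3.14 and 3.63 Hz pulse rates) are typically quantified by reading the amplitude spectrum of the epoched signal at the stimulation frequency. A minimal sketch of that nearest-bin readout follows; it assumes an integer number of stimulus cycles per epoch so that the target frequency falls on an exact FFT bin (otherwise windowing would be needed to limit spectral leakage).

```python
import numpy as np

def ssr_amplitude(signal, fs, target_hz):
    """Amplitude at target_hz from the one-sided FFT amplitude
    spectrum of a single epoch (nearest-bin readout)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2 / n   # one-sided scaling
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]
```

Because the two visual stimuli pulse at distinct rates, each elicits a spectrally separable response, and attention or audiovisual synchrony effects appear as amplitude gains at the corresponding frequency.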

  17. Audiovisual en línea en la universidad española: bibliotecas y servicios especializados (una panorámica)

    Directory of Open Access Journals (Sweden)

    Alfonso López Yepes

    2014-08-01

    Full Text Available This article surveys the state of online audiovisual information in Spanish university libraries and audiovisual services, with examples of specific applications and developments. The presence of audiovisual material is notable above all in blogs, IPTV channels, the libraries' own portals, and specific initiatives such as "La Universidad Responde", run by the audiovisual services of the Spanish universities, which constitutes a reference framework and a very prominent channel of information dissemination for the library field as well; and in social networks, for which a model of a university library social network is proposed. The article also refers to the participation of libraries and services in collaborative research and social development projects, already under way within the project "Red iberoamericana de patrimonio sonoro y audiovisual", which is committed to the social construction of audiovisual knowledge based on interaction between multidisciplinary groups of professionals and different communities of users and institutions.

  18. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Full Text Available Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and to build an accurate audio-visual speech recognition model without a frame-independence assumption. Experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  19. Proper Use of Audio-Visual Aids: Essential for Educators.

    Science.gov (United States)

    Dejardin, Conrad

    1989-01-01

    Criticizes educators as the worst users of audio-visual aids and among the worst public speakers. Offers guidelines for the proper use of an overhead projector and the development of transparencies. (DMM)

  20. EFFECTIVE WAYS OF POSTGRADUATE PEDAGOGICAL EDUCATION INSTITUTES TEACHERS’ TRAINING

    Directory of Open Access Journals (Sweden)

    Liudmyla V. Kalachova

    2013-12-01

    Full Text Available The article presents the results of a comparative analysis of training for teachers of postgraduate pedagogical education institutes across various forms of training: full-time, blended full-time/distance, and distance, following the author's program "Teacher training of postgraduate pedagogical education institutes for use of audiovisual teaching aids". The comparison was made on indicators such as the number of participants who completed the training, the pace of learning, the quality of mastery of the course material as measured by control tests, and the qualitative and quantitative performance indicators of individual case studies. As a result, the article identifies the main advantages and disadvantages of each form of education and recommends the most effective form of in-service training for teachers.

  1. Audiovisual News, Cartoons, and Films as Sources of Authentic Language Input and Language Proficiency Enhancement

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    In today's audiovisually driven world, various audiovisual programs can be incorporated as authentic sources of potential language input for second language acquisition. In line with this view, the present research aimed at discovering the effectiveness of exposure to news, cartoons, and films as three different types of authentic audiovisual…

  2. Users Requirements in Audiovisual Search: A Quantitative Approach

    NARCIS (Netherlands)

    Nadeem, Danish; Ordelman, Roeland J.F.; Aly, Robin; Verbruggen, Erwin; Aalberg, Trond; Papatheodorou, Christos; Dobreva, Milena; Tsakonas, Giannis; Farrugia, Charles J.

    2013-01-01

    This paper reports on the results of a quantitative analysis of user requirements for audiovisual search that allows the categorisation of requirements and a comparison of requirements across user groups. The categorisation provides clear directions with respect to the prioritisation of system features.

  3. When Library and Archival Science Methods Converge and Diverge: KAUST’s Multi-Disciplinary Approach to the Management of its Audiovisual Heritage

    KAUST Repository

    Kenosi, Lekoko

    2015-07-16

    Libraries and Archives have long recognized the important role played by audiovisual records in the development of an informed global citizenry, and the King Abdullah University of Science and Technology (KAUST) is no exception. Lying on the banks of the Red Sea, KAUST has a state-of-the-art library housing professional library and archives teams committed to the processing of digital audiovisual records created within and outside the University. This commitment, however, sometimes obscures the fundamental divergences unique to the two disciplines concerning the acquisition, cataloguing, access and long-term preservation of audiovisual records. This dichotomy is not isolated to KAUST but replicates itself in many settings that have employed librarians and archivists to manage their audiovisual collections. Using the KAUST audiovisual collections as a case study, the authors of this paper take the reader through the journey of managing KAUST's digital audiovisual collection. Several theoretical and methodological areas of convergence and divergence are highlighted, as well as suggestions on the way forward for the IFLA and ICA working committees on the management of audiovisual records.

  4. Computationally Efficient Clustering of Audio-Visual Meeting Data

    Science.gov (United States)

    Hung, Hayley; Friedland, Gerald; Yeo, Chuohao

    This chapter presents novel computationally efficient algorithms to extract semantically meaningful acoustic and visual events related to each of the participants in a group discussion using the example of business meeting recordings. The recording setup involves relatively few audio-visual sensors, comprising a limited number of cameras and microphones. We first demonstrate computationally efficient algorithms that can identify who spoke and when, a problem in speech processing known as speaker diarization. We also extract visual activity features efficiently from MPEG4 video by taking advantage of the processing that was already done for video compression. Then, we present a method of associating the audio-visual data together so that the content of each participant can be managed individually. The methods presented in this article can be used as a principal component that enables many higher-level semantic analysis tasks needed in search, retrieval, and navigation.
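The compressed-domain idea the chapter describes can be illustrated with a toy sketch: rather than re-analysing decoded pixels, the per-macroblock motion vectors that an MPEG-4 decoder has already computed are aggregated per participant region as a visual-activity measure. The region layout, vector values, and function below are invented for illustration; this is not the authors' implementation.

```python
# Toy sketch: estimate per-participant visual activity from the motion
# vectors an MPEG-4 decoder has already computed (compressed domain).

def visual_activity(motion_vectors, regions):
    """motion_vectors: {(x, y): (dx, dy)} per macroblock;
    regions: {participant: (x0, x1)} horizontal strips of the frame."""
    activity = {p: 0.0 for p in regions}
    for (x, y), (dx, dy) in motion_vectors.items():
        for p, (x0, x1) in regions.items():
            if x0 <= x < x1:
                # Accumulate motion-vector magnitude as an activity proxy.
                activity[p] += (dx * dx + dy * dy) ** 0.5
    return activity

# Hypothetical frame: participant A's strip shows large motion.
mv = {(0, 0): (4, 3), (5, 0): (1, 0)}
regions = {"A": (0, 4), "B": (4, 8)}
print(visual_activity(mv, regions))  # {'A': 5.0, 'B': 1.0}
```

In practice the regions would come from face or person localisation, but the cost saving is the same: the motion field is a free by-product of video compression.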

  5. Does audiovisual distraction reduce dental anxiety in children under local anesthesia? A systematic review and meta-analysis.

    Science.gov (United States)

    Zhang, Cai; Qin, Dan; Shen, Lu; Ji, Ping; Wang, Jinhua

    2018-03-02

    To perform a systematic review and meta-analysis on the effects of audiovisual distraction on reducing dental anxiety in children during dental treatment under local anesthesia. The authors identified eligible reports published through August 2017 by searching PubMed, EMBASE, and Cochrane Central Register of Controlled Trials. Clinical trials that reported the effects of audiovisual distraction on children's physiological measures, self-reports and behavior rating scales during dental treatment met the minimum inclusion requirements. The authors extracted data and performed a meta-analysis of appropriate articles. Nine eligible trials were included and qualitatively analyzed; some of these trials were also quantitatively analyzed. Among the physiological measures, heart rate or pulse rate was significantly lower (p=0.01) in children subjected to audiovisual distraction during dental treatment under local anesthesia than in those who were not; a significant difference in oxygen saturation was not observed. The majority of the studies using self-reports and behavior rating scales suggested that audiovisual distraction was beneficial in reducing anxiety perception and improving children's cooperation during dental treatment. The audiovisual distraction approach effectively reduces dental anxiety among children. Therefore, we suggest the use of audiovisual distraction when children need dental treatment under local anesthesia. This article is protected by copyright. All rights reserved.
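The quantitative step behind a pooled result like the heart-rate finding can be sketched as a minimal fixed-effect (inverse-variance) meta-analysis of mean differences. The three effect sizes and standard errors below are invented for illustration, not data from the reviewed trials.

```python
import math

# Minimal inverse-variance (fixed-effect) pooling of mean differences,
# the standard first step of a meta-analysis of continuous outcomes.

def pooled_effect(studies):
    """studies: list of (mean_difference, standard_error) per trial."""
    weights = [1.0 / se ** 2 for _, se in studies]          # w_i = 1/SE_i^2
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                      # SE of pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% CI

# Invented heart-rate drops (bpm) with their standard errors.
studies = [(-6.0, 2.0), (-4.0, 1.5), (-8.0, 3.0)]
effect, ci = pooled_effect(studies)
print(round(effect, 2), [round(x, 2) for x in ci])  # -5.17 [-7.36, -2.99]
```

A real analysis would also check heterogeneity (e.g. I²) before preferring a fixed-effect over a random-effects model.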

  6. 36 CFR 1256.98 - Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

    Science.gov (United States)

    2010-07-01

    ... obtain copies of USIA audiovisual records transferred to the National Archives of the United States? 1256... United States Information Agency Audiovisual Materials in the National Archives of the United States § 1256.98 Can I get access to and obtain copies of USIA audiovisual records transferred to the National...

  7. The KWS training power plant Zwentendorf. Optimal conditions for practical training in the sectors of maintenance and dismantling of nuclear power plants; Das KWS-Schulungskraftwerk Zwentendorf. Die ideale Voraussetzung fuer praktische Schulungen in den Bereichen Instandhaltung und Rueckbau von kerntechnischen Anlagen

    Energy Technology Data Exchange (ETDEWEB)

    Maassen, Herbert [KRAFTWERKSSCHULE E.V., Essen (Germany). Weiterbildung Instandhaltung fuer konventionelle-/kerntechnische Anlagen und erneuerbare Energien

    2014-06-15

    As a consequence of several years of cross-sector staff reductions beginning in the mid-1990s, among manufacturers of power plant installation engineering, plant service companies, and the operators of power plants and nuclear power plants themselves, an area-wide loss of know-how took place, which increasingly called into question the safe performance of maintenance activities in nuclear power plants. The search for adequate training facilities to cover these deficits led, in 2002, to the conversion of the Zwentendorf nuclear power plant into a facility for maintenance training, particularly in the areas of reactor service, decommissioning and dismantling of nuclear power plants, and other plant-specific training measures. For this purpose Zwentendorf was upgraded and transformed in a long-term process, and the resulting combination may be considered unique worldwide. The Kraftwerksschule e.V. (KWS) holds the exclusive rights to conduct training measures at Zwentendorf. Over the last 10 years, KWS has made almost all areas of this nuclear power plant accessible for training and inspections and offers a large training program. The aim of the training measures is to ensure the long-term operational reliability of the mechanical and installation engineering of nuclear power plants, as well as of fossil-fired power plants, through optimized maintenance planning and performance, and thus to operate the plants safely. Because of the direct practical reference to the original mechanical and installation engineering in the real atmosphere of a power plant, the Zwentendorf nuclear power plant is highly suitable as a centre for staff training in theory and practice. (orig.)

  8. Education and training support system

    International Nuclear Information System (INIS)

    Kubota, Rhuji; Iyadomi, Motomi.

    1996-01-01

    In order to train specialists such as the operators or maintenance staff of large-scale plants such as nuclear or thermal power plants, a high-grade teaching and training support system is required, much as in the training of aircraft pilots. A specialist in such a large-scale plant begins as a researcher in the field of machinery, electricity or physics and, after acquiring the required knowledge through fundamental education on nuclear and thermal power plants, grows into an expert operator or maintenance staff member through CAI-based learning and on-the-job training with dedicated teaching materials, in addition to training on operation or maintenance training devices imitating the actual plant. In this paper, the teaching and training support systems of nuclear and thermal power plants that support such teaching and training are introduced. (G.K.)

  9. Audiovisual English-Arabic Translation: De Beaugrande's Perspective

    Directory of Open Access Journals (Sweden)

    Alaa Eddin Hussain

    2016-05-01

    Full Text Available This paper attempts to demonstrate the significance of the seven standards of textuality with special application to audiovisual English-Arabic translation. Ample and thoroughly analysed examples have been provided to help in audiovisual English-Arabic translation decision-making. A text is meaningful if and only if it carries meaning and knowledge to its audience, and is optimally activatable, recoverable and accessible. The same is equally applicable to audiovisual translation (AVT). The latter should also carry knowledge which can be easily accessed by the TL audience, and be processed with the least energy and time, i.e. achieving the utmost level of efficiency. Communication occurs only when that text is coherent, with continuity of senses and concepts that are appropriately linked. Coherence of a text will be achieved when all aspects of cohesive devices are well accounted for pragmatically. This, combined with a good amount of psycholinguistic elements, will provide a text with optimal communicative value. Non-text is certainly devoid of such components and ultimately non-communicative. Communicative knowledge can be classified into three categories: determinate knowledge, typical knowledge and accidental knowledge. To create dramatic suspense and the element of surprise, the text in an AV environment, as in any dialogue, often carries accidental knowledge. This unusual knowledge aims to make AV material interesting in the eyes of its audience. That cognitive environment is enhanced by an adequate employment of material (picture and sound), and helps to recover sense in the text. Hence, the premise of this paper is the application of certain aspects of these standards to AV texts taken from various recent feature films and documentaries, in order to facilitate the translating process and produce an appropriate final product.

  10. Audio-Visual Equipment Depreciation. RDU-75-07.

    Science.gov (United States)

    Drake, Miriam A.; Baker, Martha

    A study was conducted at Purdue University to gather operational and budgetary planning data for the Libraries and Audiovisual Center. The objectives were: (1) to complete a current inventory of equipment including year of purchase, costs, and salvage value; (2) to determine useful life data for general classes of equipment; and (3) to determine…

  11. Dissociating verbal and nonverbal audiovisual object processing.

    Science.gov (United States)

    Hocking, Julia; Price, Cathy J

    2009-02-01

    This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.

  12. Summarizing Audiovisual Contents of a Video Program

    Science.gov (United States)

    Gong, Yihong

    2003-12-01

    In this paper, we focus on video programs that are intended to disseminate information and knowledge such as news, documentaries, seminars, etc, and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately, and then integrating the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech while the visual summary is created by eliminating duplicates/redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A Bipartite Graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.
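The alignment step can be sketched in miniature: pair each spoken sentence in the audio summary with a distinct shot in the visual summary so that the total match score is maximised. The brute-force search below stands in for the paper's bipartite-graph algorithm, and the score matrix is invented for illustration.

```python
from itertools import permutations

# Toy bipartite alignment: assign each audio-summary sentence to at most
# one visual-summary shot, maximising the total match score. Exhaustive
# search is fine at this scale; the paper uses a bipartite-graph algorithm.

def best_alignment(score):
    """score[i][j]: how well sentence i matches shot j (e.g. a face-match score)."""
    n_sent, n_shot = len(score), len(score[0])
    best, best_pairs = float("-inf"), None
    for perm in permutations(range(n_shot), n_sent):
        total = sum(score[i][j] for i, j in enumerate(perm))
        if total > best:
            best, best_pairs = total, list(enumerate(perm))
    return best, best_pairs

score = [[0.9, 0.1, 0.2],   # sentence 0 clearly belongs with shot 0
         [0.3, 0.8, 0.1]]   # sentence 1 with shot 1
print(best_alignment(score))  # pairs sentence 0 with shot 0, sentence 1 with shot 1
```

For realistic summary lengths an assignment solver (e.g. the Hungarian algorithm) replaces the exhaustive search, since the number of permutations grows factorially.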

  13. 78 FR 48190 - Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements...

    Science.gov (United States)

    2013-08-07

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-837] Certain Audiovisual Components and Products Containing the Same Notice of Request for Statements on the Public Interest AGENCY: U.S... infringing audiovisual components and products containing the same, imported by Funai Corporation, Inc. of...

  14. 36 CFR 1256.96 - What provisions apply to the transfer of USIA audiovisual records to the National Archives of the...

    Science.gov (United States)

    2010-07-01

    ... transfer of USIA audiovisual records to the National Archives of the United States? 1256.96 Section 1256.96... Information Agency Audiovisual Materials in the National Archives of the United States § 1256.96 What provisions apply to the transfer of USIA audiovisual records to the National Archives of the United States...

  15. Exposure to audiovisual programs as sources of authentic language ...

    African Journals Online (AJOL)

    Exposure to audiovisual programs as sources of authentic language input and second ... Southern African Linguistics and Applied Language Studies ... The findings of the present research contribute more insights on the type and amount of ...

  16. Infants' preference for native audiovisual speech dissociated from congruency preference.

    Directory of Open Access Journals (Sweden)

    Kathleen Shaw

    Full Text Available Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.

  17. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability

    NARCIS (Netherlands)

    Francisco, A.A.; Groen, M.A.; Jesse, A.; McQueen, J.M.

    2017-01-01

    The aim of this study was to clarify whether audiovisual processing accounted for variance in reading and reading-related abilities, beyond the effect of a set of measures typically associated with individual differences in both reading and audiovisual processing. Testing adults with and without a

  18. ETNOGRAFÍA Y COMUNICACIÓN: EL PROYECTO ARCHIVO ETNOGRÁFICO AUDIOVISUAL DE LA UNIVERSIDAD DE CHILE

    Directory of Open Access Journals (Sweden)

    Mauricio Pineda Pertier

    2012-06-01

    This article considers audiovisual ethnography as a communication process, and takes the Audiovisual Ethnographic Archive of Universidad de Chile and its experience in the development of audiovisual ethnographies during the past eight years as a case of analysis. Beyond its use as a data recording technique, the construction and dissemination of messages with social content based on the aforementioned data records constitute a complex praxis of communication production that leads us to critically review the traditional conceptualization of the concept of communication. This work discusses these models, setting forth alternatives from an applied ethno-political perspective in local development contexts.

  19. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    Science.gov (United States)

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implanted users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction) a degraded visual signal was presented, aimed at preventing lipreading of good quality. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in visual-only and in congruent audiovisual modalities were decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between

  20. Plan de empresa de una productora audiovisual de nueva creación en la ciudad de Valencia

    OpenAIRE

    BARBA MUÑOZ, SARA

    2013-01-01

    [ES] This work traces the development of a business plan for an audiovisual production company located in Valencia. We have conceived an audiovisual company aimed especially at offering its products to medium-sized enterprises. We have analysed the audiovisual sector as an entity in constant growth, given its relationship with new technologies, which makes it a sector that generates direct employment, especially among young people, who at present are a...

  1. Increasing the effectiveness of instrumentation and control training programs using integrated training settings and a systematic approach to training

    International Nuclear Information System (INIS)

    McMahon, J.F.; Rakos, N.

    1992-01-01

    The performance of plant maintenance-related tasks assigned to instrumentation and control (I&C) technicians can be broken down into the physical skills required to do the task; resident knowledge of how to do the task; the effect of maintenance on plant operating conditions; interactions with other plant organizations such as operations, radiation protection, and quality control; and knowledge of the consequences of incorrect action. A technician who has learned about the task in formal classroom presentations has not had the advantage of integrating that knowledge with the requisite physical and communication skills; hence, the first time these distinct and vital parts of the task equation are put together is on the job, during initial task performance. On-the-job training provides for the integration of skills and knowledge; however, this form of training is limited by plant conditions, the availability of supporting players, and the training experience of the personnel conducting the exercise. For licensed operations personnel, most nuclear utilities use formal classroom instruction and a full-scope control room simulator to achieve the integration of skills and knowledge in a controlled training environment. TU Electric has taken that same approach into maintenance areas by including identical plant equipment in a laboratory setting for a large portion of the training received by maintenance personnel at its Comanche Peak steam electric station. The policy of determining training needs and defining the scope of training by using the systematic approach to training has been highly effective and has provided training at a reasonable cost (approximately $18.00/student contact hour)

  2. Balance maintenance as an acquired motor skill: Delayed gains and robust retention after a single session of training in a virtual environment.

    Science.gov (United States)

    Elion, Orit; Sela, Itamar; Bahat, Yotam; Siev-Ner, Itzhak; Weiss, Patrice L Tamar; Karni, Avi

    2015-06-03

    Does the learning of a balance and stability skill exhibit time-course phases and transfer limitations characteristic of the acquisition and consolidation of voluntary movement sequences? Here we followed the performance of young adults trained in maintaining balance while standing on a moving platform synchronized with a virtual reality road travel scene. The training protocol included eight 3-minute iterations of the road scene. Center of Pressure (CoP) displacements were analyzed for each task iteration within the training session, as well as during tests at 24h, 4 weeks and 12 weeks post-training to test for consolidation-phase ("offline") gains and assess retention. In addition, CoP displacements in reaction to external perturbations were assessed before and after the training session and in the 3 subsequent post-training assessments (stability tests). There were significant reductions in CoP displacements as experience accumulated within the session, with performance stabilizing by the end of the session. However, CoP displacements were further reduced at 24h post-training (delayed "offline" gains) and these gains were robustly retained. There was no transfer of the practice-related gains to performance in the stability tests. The time-course of learning the balance maintenance task, as well as the limitation on generalizing the gains to untrained conditions, are in line with the results of studies of manual movement skill learning. The current results support the conjecture that a similar repertoire of basic neuronal mechanisms of plasticity may underlie skill (procedural, "how to" knowledge) acquisition and skill memory consolidation in voluntary and balance maintenance tasks. Copyright © 2015 Elsevier B.V. All rights reserved.
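A common way to reduce CoP samples to a single displacement measure in balance studies of this kind is the total sway-path length of the trajectory, with shorter paths indicating steadier stance. A minimal sketch (the coordinates are invented, not the study's data):

```python
import math

# Sway-path length: sum of Euclidean distances between successive
# centre-of-pressure samples. A standard summary statistic for posturography.

def sway_path_length(cop):
    """cop: list of (x, y) centre-of-pressure samples, e.g. in mm."""
    return sum(math.dist(a, b) for a, b in zip(cop, cop[1:]))

pre_training = [(0, 0), (3, 4), (3, 0), (0, 0)]   # segments 5 + 4 + 3 = 12 mm
post_training = [(0, 0), (1, 0), (1, 1), (0, 1)]  # segments 1 + 1 + 1 = 3 mm
print(sway_path_length(pre_training), sway_path_length(post_training))  # 12.0 3.0
```

Reported CoP measures vary (path length, sway area, RMS displacement); this sketch shows only the path-length variant.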

  3. Energy consumption of audiovisual devices in the residential sector: Economic impact of harmonic losses

    International Nuclear Information System (INIS)

    Santiago, I.; López-Rodríguez, M.A.; Gil-de-Castro, A.; Moreno-Munoz, A.; Luna-Rodríguez, J.J.

    2013-01-01

    In this work, energy losses and the economic consequences of the use of small appliances containing power electronics (PE) in the Spanish residential sector were estimated. Audiovisual devices emit harmonics, causing in the distribution system an increase in wiring losses and a greater total apparent power demand. Time Use Surveys (2009–10) conducted by the National Statistical Institute in Spain were used to obtain information about activities in Spanish homes involving the use of audiovisual equipment. Moreover, measurements of different types of household appliances available in the PANDA database were also utilized, and the active and non-active annual power demand of these residential-sector devices was determined. Although a single audiovisual device makes an almost negligible contribution, the aggregated effect of this type of appliance, whose total annual energy demand exceeds 4000 GWh, can be significant enough to be taken into account in any energy efficiency program. It was shown that reducing the total harmonic distortion in the distribution systems from 50% to 5% can reduce energy losses significantly, with economic savings of around several million Euros. - Highlights: • Time Use Survey provides information about Spanish household electricity consumption. • The annual aggregated energy demand of audiovisual appliances is very significant. • TV use accounts for more than 80% of household audiovisual electricity consumption. • A reduction from 50% to 5% in the total harmonic distortion would yield economic savings of around several million Euros. • Stricter regulations regarding harmonic emissions must be demanded.
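The link between harmonic distortion and wiring losses follows from I_rms = I1·sqrt(1 + THD²) for a distorted current with fundamental I1, so resistive I²R losses scale by the factor (1 + THD²). A back-of-the-envelope sketch of the 50%-to-5% comparison (illustrative numbers only, not the paper's model):

```python
# Resistive wiring losses scale with I_rms^2, and for a distorted current
# I_rms = I1 * sqrt(1 + THD^2). The loss multiplier relative to a purely
# sinusoidal current of the same fundamental is therefore 1 + THD^2.

def loss_factor(thd):
    """Multiplier on I1^2 * R wiring losses for a given THD (as a fraction)."""
    return 1.0 + thd ** 2

for thd in (0.50, 0.05):
    print(f"THD {thd:.0%}: losses x{loss_factor(thd):.4f}")

# Share of I1^2 * R removed by cutting THD from 50% to 5%.
extra_loss_cut = loss_factor(0.50) - loss_factor(0.05)
print(f"harmonic loss share removed: {extra_loss_cut:.4f} of I1^2*R")
```

This captures only the conduction-loss mechanism; the paper's economic estimate also accounts for the aggregate demand of the appliance stock.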

  4. Decision-Level Fusion for Audio-Visual Laughter Detection

    NARCIS (Netherlands)

    Reuderink, B.; Poel, Mannes; Truong, Khiet Phuong; Poppe, Ronald Walter; Pantic, Maja; Popescu-Belis, Andrei; Stiefelhagen, Rainer

    Laughter is a highly variable signal, which can be caused by a spectrum of emotions. This makes the automatic detection of laughter a challenging, but interesting task. We perform automatic laughter detection using audio-visual data from the AMI Meeting Corpus. Audio-visual laughter detection is

  5. Spatio-temporal patterns of event-related potentials related to audiovisual synchrony judgments in older adults.

    Science.gov (United States)

    Chan, Yu Man; Pianta, Michael Julian; Bode, Stefan; McKendrick, Allison Maree

    2017-07-01

    Older adults have altered perception of the relative timing between auditory and visual stimuli, even when stimuli are scaled to equate detectability. To help understand why, this study investigated the neural correlates of audiovisual synchrony judgments in older adults using electroencephalography (EEG). Fourteen younger (18-32 year old) and 16 older (61-74 year old) adults performed an audiovisual synchrony judgment task on flash-pip stimuli while EEG was recorded. All participants were assessed to have healthy vision and hearing for their age. Observers responded to whether audiovisual pairs were perceived as synchronous or asynchronous via a button press. The results showed that the onset of predictive sensory information for synchrony judgments was not different between groups. Channels over auditory areas contributed more to this predictive sensory information than visual areas. The spatial-temporal profile of the EEG activity also indicates that older adults used different resources to maintain a similar level of performance in audiovisual synchrony judgments compared with younger adults. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  7. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex.

    Science.gov (United States)

    van Atteveldt, Nienke M; Blau, Vera C; Blomert, Leo; Goebel, Rainer

    2010-02-02

    Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and

  8. fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Directory of Open Access Journals (Sweden)

    Blomert Leo

    2010-02-01

    Full Text Available Abstract Background Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. Results The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. Conclusions These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds.

  9. Perception of the Multisensory Coherence of Fluent Audiovisual Speech in Infancy: Its Emergence & the Role of Experience

    Science.gov (United States)

    Lewkowicz, David J.; Minar, Nicholas J.; Tift, Amy H.; Brandon, Melissa

    2014-01-01

    To investigate the developmental emergence of the ability to perceive the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor the 8- to 10-month-old infants exhibited audio-visual matching, in that neither group looked longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, they perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native-language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audio-visual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audio-visual synchrony cues are more important in the perception of the multisensory coherence of non-native than native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. PMID:25462038

  10. A Double Dissociation between Anterior and Posterior Superior Temporal Gyrus for Processing Audiovisual Speech Demonstrated by Electrocorticography.

    Science.gov (United States)

    Ozker, Muge; Schepers, Inga M; Magnotti, John F; Yoshor, Daniel; Beauchamp, Michael S

    2017-06-01

    Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
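
    The Bayesian prediction tested above — that combining independent cues should reduce response variability — follows from maximum-likelihood cue combination. The sketch below is illustrative only (it is not the authors' model); the cue means and variances are invented numbers:

```python
# Maximum-likelihood (Bayesian) combination of two independent Gaussian
# cues: each cue is weighted by its reliability (inverse variance), and
# the combined variance is smaller than either unimodal variance.

def combine_cues(mu_a, var_a, mu_v, var_v):
    """Optimally combine auditory and visual estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    mu_av = w_a * mu_a + w_v * mu_v
    var_av = (var_a * var_v) / (var_a + var_v)   # always <= min(var_a, var_v)
    return mu_av, var_av

# Example: noisy auditory cue (large variance), clear visual cue.
mu_av, var_av = combine_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
print(mu_av, var_av)  # 0.8 0.8 — combined variance below both unimodal variances
```

    The reduced variability observed in posterior STG for noisy audiovisual speech is consistent with this kind of reliability-weighted averaging.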

  11. Effect of job maintenance training program for employees with chronic disease - a randomized controlled trial on self-efficacy, job satisfaction, and fatigue

    NARCIS (Netherlands)

    Varekamp, Inge; Verbeek, Jos H.; de Boer, Angela; van Dijk, Frank J. H.

    2011-01-01

    Employees with a chronic physical condition may be hampered in job performance due to physical or cognitive limitations, pain, fatigue, psychosocial barriers, or because medical treatment interferes with work. This study investigates the effect of a group-training program aimed at job maintenance.

  12. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to both of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. Copyright © 2015 Elsevier B.V. All rights reserved.
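
    The redundant signals effect reported above is commonly evaluated against Miller's race-model inequality: if audiovisual responses are faster than any race between the unimodal distributions could produce, coactivation (true integration) is inferred. A minimal sketch on fabricated reaction-time data (not the study's data):

```python
import numpy as np

# Test of Miller's race-model inequality on simulated reaction times (ms).
# If P(RT_AV <= t) exceeds P(RT_A <= t) + P(RT_V <= t) at some t, the
# speed-up cannot be explained by statistical facilitation alone, which
# is taken as evidence of audiovisual integration.
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)   # auditory-only RTs (hypothetical)
rt_v = rng.normal(350, 45, 500)   # visual-only RTs (hypothetical)
rt_av = rng.normal(270, 35, 500)  # audiovisual RTs (hypothetical)

def ecdf(sample, t):
    """Empirical cumulative distribution: P(RT <= t) at each t."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(150, 500, 200)
violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))
print("race-model violation:", bool(np.any(violation > 0)))
```

    With the strongly facilitated AV distribution used here, the inequality is violated at short latencies, which is the pattern a redundant signals effect driven by integration would show.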

  13. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates changes in neuronal response across four repetitions of audio-visual learning. Obtains EEG data from the prefrontal lobe (Fp1, Fp2) of 20 subjects at the 8th grade level. Concludes that habituation of the neuronal response shows up in repetitive audio-visual learning and that brain hemisphericity can be changed by…

  14. Cinco discursos da digitalidade audiovisual

    Directory of Open Access Journals (Sweden)

    Gerbase, Carlos

    2001-01-01

    Full Text Available Michel Foucault teaches that all systematic discourse - including discourse that claims to be "neutral" or "a disinterested, objective view of what happens" - is in fact a mechanism for articulating knowledge and, in turn, for forming power. The appearance of new technologies, especially digital ones, in the field of audiovisual production has provoked an avalanche of statements from filmmakers, essays from academics, and predictions from media demiurges.

  15. The protection of minors in the new audiovisual regulation in Spain

    Directory of Open Access Journals (Sweden)

    José A. Ruiz-San Román, Ph.D.

    2011-01-01

    Full Text Available In 2010 the Spanish Parliament approved the General Law on Audiovisual Communication (GLAC), a new regulation which implements the European Audiovisual Media Services Directive (AVMSD). This research analyses how the regulations focused on the protection of children evolved throughout the legislative process, from the first text drafted by the Government to the text finally approved by Parliament. The research deals with the debates and amendments on harmful content which is prohibited or limited. The main objective of the research is to establish the extent to which the new regulation approved in Spain meets the requirements fixed by the AVMSD and the Spanish Government to guarantee child protection.

  16. Efficient Workplan Management in Maintenance Tasks

    NARCIS (Netherlands)

    Wilson, M.; Roos, N.; Huisman, B.; Witteveen, C.

    2011-01-01

    NedTrain is a Dutch company tasked with performing the maintenance of the rolling stock of the national railway company, NS. NedTrain owns several workshops at different locations. The scheduling in one such workshop will be taken as point of departure for the discussion in this paper. After

  17. Developing an Audiovisual Notebook as a Self-Learning Tool in Histology: Perceptions of Teachers and Students

    Science.gov (United States)

    Campos-Sánchez, Antonio; López-Núñez, Juan-Antonio; Scionti, Giuseppe; Garzón, Ingrid; González-Andrades, Miguel; Alaminos, Miguel; Sola, Tomás

    2014-01-01

    Videos can be used as didactic tools for self-learning under several circumstances, including those cases in which students are responsible for the development of this resource as an audiovisual notebook. We compared students' and teachers' perceptions regarding the main features that an audiovisual notebook should include. Four…

  18. Children with a History of SLI Show Reduced Sensitivity to Audiovisual Temporal Asynchrony: An ERP Study

    Science.gov (United States)

    Kaganovich, Natalya; Schumaker, Jennifer; Leonard, Laurence B.; Gustafson, Dana; Macias, Danielle

    2014-01-01

    Purpose: The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. Method: Fifteen H-SLI children, 15…

  19. Narrativa audiovisual i cinema d'animació per ordinador

    OpenAIRE

    Duran Castells, Jaume

    2009-01-01

    FROM THE THESIS: This doctoral thesis studies the relationships between audiovisual narrative and computer-animated cinema, and presents an analysis of the Pixar Animation Studios feature films released between 1995 and 2006.

  20. Maintenance philosophy and program at Cernavoda NPP

    International Nuclear Information System (INIS)

    Bobos, M.; Enciu, G.

    1994-01-01

    Maintenance plays a key role in ensuring safe and reliable operation. An effective maintenance program should ensure that installed equipment operates when needed and that equipment malfunctions or deficiencies are corrected in time and rarely recur. Maintenance includes not only the activities traditionally associated with identifying or correcting current or potential equipment deficiencies but also extends to the technical functions supporting the conduct of these activities (for example, engineering, technical support, chemistry control, radiological protection, industrial safety and training). The maintenance management program should clearly define the relationships among these supporting groups as they relate to overall plant maintenance, and should promote the concept of a successful integrated team effort. (Author)

  1. [From oral history to the research film: the audiovisual as a tool of the historian].

    Science.gov (United States)

    Mattos, Hebe; Abreu, Martha; Castro, Isabel

    2017-01-01

    An analytical essay on the process of image production, the formation of an audiovisual archive, the analysis of sources, and the creation of the filmic narrative of the four historiographic films that form the DVD set Passados presentes (Present Pasts), from the Oral History and Image Laboratory of Universidade Federal Fluminense (Labhoi/UFF). Drawing on excerpts from Labhoi's audiovisual archive and from the films themselves, the article analyzes how the research problem (the memory of slavery and the legacy of the slave song in the agrofluminense region) led us to produce images in a research situation; the analytical shift in relation to the cinematographic documentary and the ethnographic film; and the specificities of revisiting an audiovisual collection in light of newly formulated research problems.

  2. Comparison for younger and older adults: Stimulus temporal asynchrony modulates audiovisual integration.

    Science.gov (United States)

    Ren, Yanna; Ren, Yanling; Yang, Weiping; Tang, Xiaoyu; Wu, Fengxia; Wu, Qiong; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong

    2018-02-01

    Recent research has shown that the magnitudes of responses to multisensory information are highly dependent on the stimulus structure. The temporal proximity of multiple signal inputs is a critical determinant for cross-modal integration. Here, we investigated the influence that temporal asynchrony has on audiovisual integration in both younger and older adults using event-related potentials (ERPs). Our results showed that in the simultaneous audiovisual condition, except for the earliest integration (80-110 ms), which occurred in the occipital region for older adults but was absent for younger adults, early integration was similar for the younger and older groups. Additionally, late integration was delayed in older adults (280-300 ms) compared to younger adults (210-240 ms). In audition-leading vision conditions, the earliest integration (80-110 ms) was absent in younger adults but did occur in older adults. Additionally, after increasing the temporal disparity from 50 ms to 100 ms, late integration was delayed in both younger (from 230-290 ms to 280-300 ms) and older (from 210-240 ms to 280-300 ms) adults. In the audition-lagging vision conditions, integration only occurred in the A100V condition for younger adults and in the A50V condition for older adults. The current results suggest that the audiovisual temporal integration pattern differed between the audition-leading and audition-lagging vision conditions and further reveal the varying effect of temporal asynchrony on audiovisual integration in younger and older adults. Copyright © 2017 Elsevier B.V. All rights reserved.
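
    Integration windows like those reported here (e.g., 80-110 ms) are typically identified with the additive criterion: the ERP to audiovisual stimulation is compared with the sum of the unimodal ERPs, and time points where AV differs from A + V index nonlinear integration. A minimal sketch on synthetic waveforms (all numbers fabricated for illustration):

```python
import numpy as np

# Additive-model test for audiovisual integration on synthetic ERPs.
# Integration is inferred where the AV response deviates from the sum
# of the unimodal responses, i.e. AV - (A + V) != 0.
fs = 1000                          # sampling rate (Hz)
t = np.arange(-100, 400) / fs      # epoch from -100 to 400 ms

def erp(latency_ms, amp):
    """Gaussian-shaped component peaking at latency_ms (15 ms width)."""
    return amp * np.exp(-((t - latency_ms / 1000) ** 2) / (2 * 0.015 ** 2))

erp_a = erp(100, 4.0)                    # auditory N1-like deflection
erp_v = erp(150, 3.0)                    # visual component
erp_av = erp_a + erp_v + erp(100, -1.5)  # AV with a sub-additive interaction

diff = erp_av - (erp_a + erp_v)          # deviation from additivity
window = (t >= 0.080) & (t <= 0.110)     # 80-110 ms integration window
print("mean AV-(A+V) in 80-110 ms window:", round(diff[window].mean(), 2))
```

    A nonzero difference wave in the tested window is what studies of this kind submit to statistical testing across participants.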

  3. The effects of semantic congruency: a research of audiovisual P300-speller.

    Science.gov (United States)

    Cao, Yong; An, Xingwei; Ke, Yufeng; Jiang, Jin; Yang, Hanjun; Chen, Yuqian; Jiao, Xuejun; Qi, Hongzhi; Ming, Dong

    2017-07-25

    Over the past few decades, there have been many studies of aspects of brain-computer interfaces (BCIs). Of particular interest are event-related potential (ERP)-based BCI spellers that aim at helping mental typewriting. Audiovisual stimuli-based BCI systems have attracted much attention from researchers, and most existing studies of audiovisual BCIs were based on a semantically incongruent stimulus paradigm. However, no related study had reported whether system performance or participant comfort differs between a BCI based on a semantically congruent paradigm and one based on a semantically incongruent paradigm. The goal of this study was to investigate the effects of semantic congruency on system performance and participant comfort in an audiovisual BCI. Two audiovisual paradigms (semantically congruent and incongruent) were adopted, and 11 healthy subjects participated in the experiment. High-density electrical mapping of ERPs and behavioral data were measured for the two stimulus paradigms. The behavioral data indicated no significant difference between the congruent and incongruent paradigms in offline classification accuracy. Nevertheless, eight of the 11 participants reported a preference for the semantically congruent experiment, two reported no difference between the two conditions, and only one preferred the semantically incongruent paradigm. In addition, higher ERP amplitudes were found in the incongruent stimulus paradigm. In summary, the semantically congruent paradigm offered better participant comfort while maintaining the same recognition rate as the incongruent paradigm. Furthermore, our study suggests that speller paradigm design must take both system performance and user experience into consideration rather than merely pursuing a larger ERP response.

  4. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    Science.gov (United States)

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
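
    The distributional-learning idea described above — phonological categories acquired from the joint statistics of auditory and visual cues — can be sketched with an off-the-shelf Gaussian mixture model fit to unlabeled two-cue data. This is an illustration under invented cue distributions, not the authors' implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Unsupervised acquisition of two phonological categories from joint
# auditory + visual cue distributions. The two cue dimensions here are
# hypothetical stand-ins (e.g. voice-onset time, lip aperture).
rng = np.random.default_rng(1)
cat1 = rng.normal([10.0, 0.2], [5.0, 0.05], size=(300, 2))  # /b/-like tokens
cat2 = rng.normal([50.0, 0.6], [8.0, 0.08], size=(300, 2))  # /p/-like tokens
cues = np.vstack([cat1, cat2])

# Fit a 2-component GMM to the unlabeled cue data; the learned means and
# covariances play the role of acquired category representations.
gmm = GaussianMixture(n_components=2, random_state=0).fit(cues)
labels = gmm.predict(cues)

print("recovered cue-1 means:", np.sort(gmm.means_[:, 0]).round(1))
print("cluster sizes:", np.bincount(labels))
```

    The recovered component means sit near the generating category centers, mirroring how distributional statistics alone can yield category structure over multimodal cues.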

  5. La música en la narrativa publicitaria audiovisual. El caso de Coca-Cola

    OpenAIRE

    Sánchez Porras, María José

    2015-01-01

    This research presents an in-depth study of music in audiovisual advertising and its relationship with other sound and visual aspects of advertising. A specific brand, Coca-Cola, was selected because of its global reach and recognition. A new perspective on musical analysis in audiovisual advertising was adopted, addressing the different elements of musical structure through the screening of the advertisements. …

  6. Long-term effects of 1-year maintenance training on physical functioning and health status in patients with COPD: A randomized controlled study

    DEFF Research Database (Denmark)

    Ringbaek, Thomas; Brondum, Eva; Martinez, Gerd

    2010-01-01

    PURPOSE: To examine whether maintenance training (MT) for 1 year improved the long-term effects of a 7-week chronic obstructive pulmonary disease (COPD) rehabilitation program. METHODS: After a 7-week outpatient rehabilitation program, 96 patients with COPD were randomized to either an MT group (n… study period. Primary effect parameters were Endurance Shuttle Walk Test (ESWT) time and health status (St. George's Respiratory Questionnaire, SGRQ). Secondary effect parameters were adherence to supervised training, dropout rates, and hospitalization. RESULTS: Compared with the control group, the MT… or hospital admissions, compared with unsupervised daily training at home. The effect of the MT was closely related to adherence to the program.

  7. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than to information from either side.

  8. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience.

    Science.gov (United States)

    Lewkowicz, David J; Minar, Nicholas J; Tift, Amy H; Brandon, Melissa

    2015-02-01

    To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Managing Custodial and Maintenance Staffs.

    Science.gov (United States)

    Fickes, Michael

    2001-01-01

    Presents some basic maintenance management techniques that can help schools meet their budgets, preserve staffing levels, meet productivity needs, and sustain quality services. Tips for staff recruitment, training, and retention are explored. (GR)

  10. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip-movements of the speaker, we investigated the efficiency of audio and visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  11. [Learning to use semiautomatic external defibrillators through audiovisual materials for schoolchildren].

    Science.gov (United States)

    Jorge-Soto, Cristina; Abelairas-Gómez, Cristian; Barcala-Furelos, Roberto; Gregorio-García, Carolina; Prieto-Saborit, José Antonio; Rodríguez-Núñez, Antonio

    2016-01-01

    To assess the ability of schoolchildren to use a semiautomated external defibrillator (SAED) to provide an effective shock, and their retention of the skill 1 month after a training exercise supported by audiovisual materials. Quasi-experimental controlled study in 205 initially untrained schoolchildren aged 6 to 16 years old. SAEDs were used to apply shocks to manikins. The students took a baseline test (T0) of skill and were then randomized to an experimental or control group in the first phase (T1). The experimental group watched a training video, and both groups were then retested. The children were tested in simulations again 1 month later (T2). A total of 196 students completed all 3 phases. Ninety-six (95.0%) of the secondary school students and 54 (56.8%) of the primary schoolchildren were able to explain what a SAED is. Twenty of the secondary school students (19.8%) and 8 of the primary schoolchildren (8.4%) said they knew how to use one. At T0, 78 participants (39.8%) were able to simulate an effective shock. At T1, 36 controls (34.9%) and 56 experimental-group children (60.2%) achieved an effective shock (P…). Audiovisual instruction improves students' skill in managing a SAED and helps them retain what they learned for later use.

  12. Recording and Validation of Audiovisual Expressions by Faces and Voices

    Directory of Open Access Journals (Sweden)

    Sachiko Takagi

    2011-10-01

    Full Text Available This study aims to further examine the cross-cultural differences in multisensory emotion perception between Western and East Asian people. In this study, we recorded audiovisual stimulus videos of Japanese and Dutch actors saying a neutral phrase with one of the basic emotions. We then conducted a validation experiment of the stimuli. In the first part (facial expression), participants watched a silent video of the actors and judged what kind of emotion each actor was expressing by choosing among six options (i.e., happiness, anger, disgust, sadness, surprise, and fear). In the second part (vocal expression), they listened to the audio track of the same videos without the video images, with the same task. We analyzed the categorization responses based on accuracy and confusion matrices and created a controlled audiovisual stimulus set.
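
    The validation analysis described — accuracy plus a confusion matrix over six forced-choice emotion categories — can be sketched as follows. The trial data below are synthetic, not the study's responses:

```python
import numpy as np

# Accuracy and confusion matrix for forced-choice emotion categorization.
# Rows index the intended emotion; columns index the chosen emotion.
EMOTIONS = ["happiness", "anger", "disgust", "sadness", "surprise", "fear"]

def confusion(intended, chosen, n=len(EMOTIONS)):
    """Tally a confusion matrix from paired intended/chosen indices."""
    m = np.zeros((n, n), dtype=int)
    for i, c in zip(intended, chosen):
        m[i, c] += 1
    return m

# Hypothetical trials: indices into EMOTIONS.
intended = [0, 0, 1, 1, 2, 3, 4, 5, 5, 5]
chosen   = [0, 0, 1, 2, 2, 3, 4, 5, 5, 4]  # anger->disgust, fear->surprise errors

cm = confusion(intended, chosen)
accuracy = np.trace(cm) / cm.sum()          # correct choices lie on the diagonal
print(f"accuracy = {accuracy:.0%}")  # 8 of 10 trials correct -> accuracy = 80%
```

    Off-diagonal cells reveal systematic confusions (e.g. fear judged as surprise), which is the information such validation studies use to prune or re-record stimuli.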

  13. Upgrade of maintenance technologies of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kamada, Kazuaki

    2005-01-01

    In order to enhance long-term safe and stable operation of the Ikata Nuclear Power Plants (NPPs) in a more efficient way, a maintenance technology upgrade project was started with affiliated companies, aiming at the establishment of a simplified and efficient self-maintenance system. A maintenance technique and supervisor qualification system was introduced after improvement and reinforcement of personnel education and training. Reflecting an investigation of maintenance activities at US NPPs and productivity improvements in other industries, a preventive maintenance optimization project was carried out, including the introduction of key performance indicators (KPIs), a new system incorporating reliability-centered maintenance (RCM) and condition-based maintenance (CBM), and on-line monitoring and maintenance (OLM) based on risk assessment. Enterprise asset management (EAM) to establish an information database, and total productive maintenance (TPM) activities in which all personnel participate in self-maintenance, were also introduced. (T. Tanaka)

  14. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    Science.gov (United States)

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  15. Audiovisual Integration of Speech in a Patient with Broca’s Aphasia

    Directory of Open Access Journals (Sweden)

    Tobias Søren Andersen

    2015-04-01

    Full Text Available Lesions to Broca’s area cause aphasia characterised by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca’s area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca’s area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca’s aphasia did not experience the McGurk illusion suggesting that an intact Broca’s area is necessary for audiovisual integration of speech. Here we describe a patient with Broca’s aphasia who experienced the McGurk illusion. This indicates that an intact Broca’s area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca’s area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke’s aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca’s aphasia.

  16. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.

    Science.gov (United States)

    Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc

    2017-09-01

    Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
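The additive-model comparison this abstract describes (AV versus the sum A + V of the unimodal evoked potentials) can be sketched on synthetic data; the waveform shapes, trial counts, and P2 window bounds below are illustrative assumptions, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)   # epoch from -100 ms to 400 ms

def fake_erp(peak_ms, amp, n_trials=60):
    """Synthetic single-trial ERPs (trials x samples): a Gaussian
    deflection at peak_ms plus noise. Purely illustrative."""
    wave = amp * np.exp(-((t - peak_ms / 1000) ** 2) / (2 * 0.02 ** 2))
    return wave + rng.normal(0, 0.5, (n_trials, t.size))

a = fake_erp(100, -2.0)    # auditory-only condition
v = fake_erp(170, -1.0)    # visual-only condition
av = fake_erp(100, -2.5)   # audiovisual condition

# Additive model: a non-zero AV - (A + V) difference indicates
# a multisensory interaction rather than independent summation.
interaction = av.mean(axis=0) - (a.mean(axis=0) + v.mean(axis=0))

# Mean interaction amplitude in an assumed P2 window (150-250 ms).
win = (t >= 0.15) & (t <= 0.25)
p2_effect = float(interaction[win].mean())
print(f"AV - (A+V) mean amplitude in P2 window: {p2_effect:.2f}")
```

In the study this comparison is made on N1 and P2 auditory evoked potentials; the window here stands in for whatever peak-detection procedure the authors actually used.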

  17. Today's and tomorrow's retrieval practice in the audiovisual archive

    NARCIS (Netherlands)

    Huurnink, B.; Snoek, C.G.M.; de Rijke, M.; Smeulders, A.W.M.

    2010-01-01

    Content-based video retrieval is maturing to the point where it can be used in real-world retrieval practices. One such practice is the audiovisual archive, whose users increasingly require fine-grained access to broadcast television content. We investigate to what extent content-based video

  18. Visual simulation study of equipment maintenance in dangerous environment

    International Nuclear Information System (INIS)

    Zhu Bo; Yang Yanhua; Li Shiting

    2010-01-01

    The maintenance characteristics of dangerous environments are analyzed, and the application characteristics of visualized maintenance technology are introduced. An interactive method for implementing maintenance simulation on the EON simulation platform is presented. An interactive Virtual Maintenance Training System (VMTS) is then developed, and its composition and functions are described in detail. The VMTS can be used across a wide range of applications and is compatible with common virtual-reality hardware. (author)

  19. Laboratory services series: a master-slave manipulator maintenance program

    International Nuclear Information System (INIS)

    Jenness, R.G.; Hicks, R.E.; Wicker, C.D.

    1976-12-01

    The volume of master-slave manipulator maintenance at Oak Ridge National Laboratory has necessitated the establishment of a repair facility and the organization of a specially trained group of craftsmen. Emphasis on cell containment requires the use of manipulator boots and the development of precise procedures for maintaining the 287 installed units. A very satisfactory computer-programmed maintenance system has been established at the Laboratory to provide an economical approach to preventive maintenance.

  20. Review of maintenance personnel practices at nuclear power plants

    International Nuclear Information System (INIS)

    Chockie, A.D.; Badalamente, R.V.; Hostick, C.J.; Vickroy, S.C.; Bryant, J.L.; Imhoff, C.H.

    1984-05-01

    As part of the Nuclear Regulatory Commission (NRC) sponsored Maintenance Qualifications and Staffing Project, the Pacific Northwest Laboratory (PNL) has conducted a preliminary assessment of nuclear power plant (NPP) maintenance practices. As requested by the NRC, the following areas within the maintenance function were examined: personnel qualifications, maintenance training, overtime, shiftwork, and staffing levels. The purpose of the assessment was to identify the primary safety-related problems that required further analysis before specific recommendations could be made on the regulations affecting NPP maintenance operations.

  1. Audiovisual materials are effective for enhancing the correction of articulation disorders in children with cleft palate.

    Science.gov (United States)

    Pamplona, María Del Carmen; Ysunza, Pablo Antonio; Morales, Santiago

    2017-02-01

    Children with cleft palate frequently show speech disorders known as compensatory articulation. Compensatory articulation requires a prolonged period of speech intervention that should include reinforcement at home. However, relatives frequently do not know how to work with their children at home. To study whether the use of audiovisual materials especially designed to complement speech pathology treatment in children with compensatory articulation can be effective for stimulating articulation practice at home and consequently enhancing speech normalization in children with cleft palate. Eighty-two patients with compensatory articulation were studied. Patients were randomly divided into two groups. Both groups received speech pathology treatment aimed at correcting articulation placement. In addition, patients in the active group received a set of audiovisual materials to be used at home. Parents were instructed in strategies and ideas for using the materials with their children. Severity of compensatory articulation was compared at the onset and at the end of the speech intervention. After the speech therapy period, the group of patients using audiovisual materials at home demonstrated significantly greater improvement in articulation compared with the patients receiving on-site speech pathology treatment without audiovisual supporting materials. The results of this study suggest that audiovisual materials especially designed for practicing adequate articulation placement at home can be effective for reinforcing and enhancing speech pathology treatment of patients with cleft palate and compensatory articulation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators.

    Science.gov (United States)

    Houtenbos, M; de Winter, J C F; Hale, A R; Wieringa, P A; Hagenzieker, M P

    2017-04-01

    A large portion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with it off), each session consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible. Copyright © 2016. Published by Elsevier Ltd.
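The display logic this abstract describes (blink and beep rates driven by the approaching car's speed, lateralized to the approach side) could be sketched as below; the linear mapping, rate bounds, and speed range are illustrative assumptions, since the abstract only states that rates were a function of speed:

```python
def av_feedback(approach_speed_kmh, approach_side,
                min_rate_hz=1.0, max_rate_hz=8.0, max_speed_kmh=80.0):
    """Map an approaching car's speed to a blink/beep rate and route
    the cue to the matching side (light on the speedometer, beep in
    one ear). All numeric parameters are hypothetical."""
    if approach_side not in ("left", "right"):
        raise ValueError("approach_side must be 'left' or 'right'")
    # Clamp speed into [0, max] and interpolate the rate linearly.
    frac = min(max(approach_speed_kmh / max_speed_kmh, 0.0), 1.0)
    rate = min_rate_hz + frac * (max_rate_hz - min_rate_hz)
    return {
        "blink_rate_hz": rate,           # red blinking light
        "beep_rate_hz": rate,            # beeps in headphones
        "light_side": approach_side,     # left vs. right on speedometer
        "audio_channel": approach_side,  # left vs. right ear
    }

cue = av_feedback(40.0, "left")
print(cue["blink_rate_hz"], cue["audio_channel"])
```

With the assumed parameters, a car approaching at half the maximum speed yields a rate halfway between the bounds, so faster approaches produce faster blinking and beeping on the corresponding side.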

  3. Héroes, machos o, simplemente, hombres: una mirada a la representación audiovisual de las (nuevas masculinidades / Heroes, Machomen or, Just Men: A Look at the Audiovisual Representation of the (New Masculinities

    Directory of Open Access Journals (Sweden)

    Francisco A. Zurian Hernández

    2016-09-01

    Full Text Available This text examines the evolution of the representation of men in audiovisual media (cinema and television) and how that representation has evolved from the patriarchal macho to new masculinities: plural, non-universalist models of men outside the influence of patriarchal ideology. Keywords: gender, men, masculinities, audiovisual, cinema, television.

  4. Alterations in audiovisual simultaneity perception in amblyopia

    OpenAIRE

    Richards, Michael D.; Goltz, Herbert C.; Wong, Agnes M. F.

    2017-01-01

    Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged...

  5. Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect.

    Science.gov (United States)

    Van Engen, Kristin J; Xie, Zilong; Chandrasekaran, Bharath

    2017-02-01

    In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

  6. To select the best tool for generating 3D maintenance data and to set the detailed process for obtaining the 3D maintenance data

    Science.gov (United States)

    Prashanth, B. N.; Roy, Kingshuk

    2017-07-01

    Three-dimensional (3D) maintenance data provide a link between design and technical documentation, creating interactive 3D graphical training and maintenance material. It is difficult for an operator to constantly page through huge paper manuals or keep returning to a computer while maintaining a machine, which makes maintenance work fatiguing. A 3D animation, by contrast, makes maintenance guidance simple, since there is no language barrier. This research deals with generating 3D maintenance data for a given machine: the best tool for producing such data is selected and analyzed, and, using that tool, a detailed process for extracting 3D maintenance data for any machine is defined. This project thus aims to select the best tool for obtaining 3D maintenance data and to define the detailed process for obtaining it. 3D maintenance reduces the use of bulky manuals, which invite human error and make an operator's work fatiguing; it would therefore help in training and maintenance and increase productivity. Compared with Cortona 3D and Deep Exploration, 3Dvia proves to be better: it is good at data translation, has the best renderings of the three 3D maintenance packages, is very user friendly, offers various options for creating 3D animations, and its Interactive Electronic Technical Publication (IETP) integration is also better than that of the other two packages. Hence 3Dvia proves to be the best software for obtaining 3D maintenance data for any machine.

  7. Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial.

    Science.gov (United States)

    Pollock, Sean; O'Brien, Ricky; Makhija, Kuldeep; Hegi-Johnson, Fiona; Ludbrook, Jane; Rezo, Angela; Tse, Regina; Eade, Thomas; Yeghiaian-Alvandi, Roland; Gebski, Val; Keall, Paul J

    2015-07-18

    There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments across Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients recruited into the intervention arm and 1/3 into the control arm. 2:1 randomisation is appropriate because within the intervention arm there is a screening procedure in which only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this

  8. Audiovisual biofeedback breathing guidance for lung cancer patients receiving radiotherapy: a multi-institutional phase II randomised clinical trial

    International Nuclear Information System (INIS)

    Pollock, Sean; O’Brien, Ricky; Makhija, Kuldeep; Hegi-Johnson, Fiona; Ludbrook, Jane; Rezo, Angela; Tse, Regina; Eade, Thomas; Yeghiaian-Alvandi, Roland; Gebski, Val; Keall, Paul J

    2015-01-01

    There is a clear link between irregular breathing and errors in medical imaging and radiation treatment. The audiovisual biofeedback system is an advanced form of respiratory guidance that has previously been demonstrated to facilitate regular patient breathing. The clinical benefits of audiovisual biofeedback will be investigated in an upcoming multi-institutional, randomised, and stratified clinical trial recruiting a total of 75 lung cancer patients undergoing radiation therapy. To comprehensively perform a clinical evaluation of the audiovisual biofeedback system, a multi-institutional study will be performed. Our methodological framework will be based on the widely used Technology Acceptance Model, which gives qualitative scales for two specific variables, perceived usefulness and perceived ease of use, which are fundamental determinants of user acceptance. A total of 75 lung cancer patients will be recruited across seven radiation oncology departments across Australia. Patients will be randomised in a 2:1 ratio, with 2/3 of the patients recruited into the intervention arm and 1/3 into the control arm. 2:1 randomisation is appropriate because within the intervention arm there is a screening procedure in which only patients whose breathing is more regular with audiovisual biofeedback will continue to use this system for their imaging and treatment procedures. Patients within the intervention arm whose free breathing is more regular than with audiovisual biofeedback in the screening procedure will remain in the intervention arm of the study, but their imaging and treatment procedures will be performed without audiovisual biofeedback. Patients will also be stratified by treating institution and by treatment intent (palliative vs. radical) to ensure similar balance in the arms across the sites. Patients and hospital staff operating the audiovisual biofeedback system will complete questionnaires to assess their experience with audiovisual biofeedback. The objectives of this
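The allocation scheme described in this trial protocol (2:1 intervention:control, stratified by treating institution and treatment intent) can be sketched as follows; the permuted-block-of-three mechanism and the toy patient list are illustrative assumptions, as the protocol does not specify the randomisation algorithm:

```python
import random

def stratified_2to1_randomise(patients, seed=42):
    """Assign patients 2:1 (intervention:control) within each stratum.

    Strata are (institution, treatment intent), as in the trial
    description; blocks of three (two intervention, one control,
    shuffled) are an assumed implementation detail."""
    rng = random.Random(seed)
    strata = {}
    for p in patients:
        strata.setdefault((p["institution"], p["intent"]), []).append(p)

    allocation = {}
    for members in strata.values():
        block = []
        for p in members:
            if not block:  # refill and shuffle a 2:1 block
                block = ["intervention", "intervention", "control"]
                rng.shuffle(block)
            allocation[p["id"]] = block.pop()
    return allocation

# Hypothetical cohort: 75 patients over 7 institutions, two intents.
patients = [{"id": i,
             "institution": i % 7,
             "intent": "palliative" if i % 3 == 0 else "radical"}
            for i in range(75)]
arms = stratified_2to1_randomise(patients)
print(sum(a == "intervention" for a in arms.values()), "of", len(arms))
```

Permuted blocks keep the 2:1 ratio approximately balanced within every stratum rather than only overall, which is what stratifying by site and intent is meant to guarantee.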

  9. Educar em comunicação audiovisual: um desafio para a Cuba “atualizada”

    Directory of Open Access Journals (Sweden)

    Liudmila Morales Alfonso

    2017-09-01

    Full Text Available The article analyses the relevance of audiovisual media education in Cuba at a time when the "updating" of the economic and social model has become a priority for the Government. The "selective isolation" that for decades favoured an exclusive audiovisual offer concentrated in state media was disrupted from 2008 onwards by the rise of the "package", an informal alternative for content distribution. Audiences now consume the foreign audiovisual products of their choice, at the times they choose. Yet, faced with this change in audiovisual consumption patterns, acknowledged in official and press discourse, the government strategy favours protectionist alternatives to the "banal" rather than assuming formal responsibility for empowering citizens.

  10. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    Full Text Available In this article we show how Channel Theory (Barwise and Seligman, 1997) can be used to model the process of information extraction performed by audiences of audiovisual content. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can undertake the extraction of information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process is carried out for each channel, we propose a method of representing all the informative values an agent can obtain from a content using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation reflects the evolution of the informative items through the evolution of the audiovisual content.

  11. Use of Audiovisual Media and Equipment by Medical Educationists ...

    African Journals Online (AJOL)

    The most frequently used audiovisual medium and equipment is the transparency on an overhead projector (OHP), while the medium and equipment barely used for teaching is computer graphics on a multimedia projector. This study also suggests ways of improving teaching-learning processes in medical education, ...

  12. 30 CFR 49.6 - Equipment and maintenance requirements.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Equipment and maintenance requirements. 49.6... TRAINING MINE RESCUE TEAMS § 49.6 Equipment and maintenance requirements. (a) Each mine rescue station... indicates that a corrective action is necessary, the corrective action shall be made and the person shall...

  13. Bayesian calibration of simultaneity in audiovisual temporal order judgments.

    Directory of Open Access Journals (Sweden)

    Shinya Yamamoto

    Full Text Available After repeated exposure to two successive audiovisual stimuli presented in one frequent order, participants eventually perceive a pair separated by some lag time in the same order as occurring simultaneously (lag adaptation). In contrast, we previously found that perceptual changes occurred in the opposite direction in response to tactile stimuli, conforming to Bayesian integration theory (Bayesian calibration). We further showed, in theory, that the effect of Bayesian calibration cannot be observed when lag adaptation is fully operational. This led to the hypothesis that Bayesian calibration affects judgments regarding the order of audiovisual stimuli, but that this effect is concealed behind the lag adaptation mechanism. In the present study, we showed that lag adaptation is pitch-insensitive using two sounds at 1046 and 1480 Hz. This enabled us to cancel lag adaptation by associating one pitch with sound-first stimuli and the other with light-first stimuli. When we presented each type of stimulus (high or low tone) in a different block, the point of simultaneity shifted to "sound-first" for the pitch associated with sound-first stimuli, and to "light-first" for the pitch associated with light-first stimuli. These results are consistent with lag adaptation. In contrast, when we delivered each type of stimulus in randomized order, the point of simultaneity shifted to "light-first" for the pitch associated with sound-first stimuli, and to "sound-first" for the pitch associated with light-first stimuli. These results clearly show that Bayesian calibration is pitch-specific and operates behind pitch-insensitive lag adaptation during temporal order judgments of audiovisual stimuli.
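The Bayesian calibration idea in this abstract, that repeated exposure to one temporal order shifts a prior which then pulls perceived lags toward it, can be sketched as standard Gaussian cue combination; the noise values and prior mean below are illustrative assumptions, not the study's fitted parameters:

```python
def bayes_perceived_lag(measured_lag_ms, prior_mean_ms,
                        sensory_sd=60.0, prior_sd=80.0):
    """Posterior-mean estimate of an audiovisual lag.

    A minimal Gaussian-observer sketch of 'Bayesian calibration':
    the percept is a precision-weighted average of the sensory
    measurement and a prior whose mean drifts toward the recently
    experienced lags. All numeric values are hypothetical."""
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight on evidence
    return w * measured_lag_ms + (1 - w) * prior_mean_ms

# After exposure to sound-first pairs (negative lag = sound first),
# a physically simultaneous pair (0 ms) is perceived as sound-leading,
# so the point of subjective simultaneity shifts the opposite way to
# lag adaptation.
print(bayes_perceived_lag(0.0, prior_mean_ms=-100.0))
```

This opposite-direction shift is exactly why, in the randomized-order condition, the point of simultaneity moved toward "light-first" for the pitch associated with sound-first stimuli.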

  14. The audiovisual communication policy of the socialist Government (2004-2009: A neoliberal turn

    Directory of Open Access Journals (Sweden)

    Ramón Zallo, Ph. D.

    2010-01-01

    Full Text Available The first legislature of Jose Luis Rodriguez Zapatero's government (2004-08 generated important initiatives for some progressive changes in the public communicative system. However, all of these initiatives dissolved during the second legislature to give way to a non-regulated, privatizing model that is detrimental to the public service. Three phases can be distinguished chronologically: the first characterized by interesting reforms; a second of contradictory reforms; and, in the second legislature, an accumulation of counter-reforms that led towards a communicative system model completely different from the one devised in the first legislature. This indicates that there have been not one but two different audiovisual policies, running the cyclical route of audiovisual policy from one end to the other. The emphasis has changed from the public service to private concentration; from decentralization to centralization; from the diffusion of knowledge to the accumulation and appropriation of cognitive capital; from the Keynesian model - combined with the Schumpeterian model and a preference for social access - to a delayed return to the neoliberal model, after having distorted the market through public decisions to the benefit of the most important audiovisual services providers. All this seems to crystallize in the impressive process of concentration occurring between audiovisual services providers in two large groups: one integrated by Mediaset and Sogecable, and another - under negotiation - between Antena 3 and Imagina. A combination of neo-statist restructuring of the market and neo-liberalism.

  15. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.

    Science.gov (United States)

    Kim, Heejung; Hahm, Jarang; Lee, Hyekyoung; Kang, Eunjoo; Kang, Hyejin; Lee, Dong Soo

    2015-05-01

    The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual or unimodal speech perception with counterpart irrelevant noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homological framework through hierarchical clustering, such as single-linkage distance, to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. For speech perception, bimodal (audiovisual) speech cues or unimodal speech cues with counterpart irrelevant noise (auditory white noise or visual gum-chewing) were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter couplings of the left anterior temporal gyrus-anterior insula component and of right premotor-visual components were observed than in the auditory and visual speech cue conditions, respectively. Interestingly, visual speech under white noise is perceived through tight negative coupling of the left inferior frontal region with the right anterior cingulate, left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected and can reflect efficient or effortful processing during natural audiovisual integration or lip-reading, respectively, in speech perception.
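The filtration this abstract relies on, tracking connected components of a functional network as an edge threshold sweeps through all values (0-dimensional persistent homology via single-linkage clustering), can be illustrated on toy data; the synthetic correlation matrix below is an assumption, not the study's fMRI connectivity:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Toy "functional connectivity": correlations over 8 regions' signals.
ts = rng.normal(size=(8, 200))
ts[1] += 0.8 * ts[0]   # couple regions 0-1
ts[3] += 0.8 * ts[2]   # couple regions 2-3
corr = np.corrcoef(ts)

# Distance = 1 - correlation; single linkage records the threshold at
# which components merge, i.e. the 0-dimensional persistence filtration.
dist = 1.0 - corr
condensed = dist[np.triu_indices(8, k=1)]
Z = linkage(condensed, method="single")

# Count connected components at increasing filtration thresholds:
# strongly coupled region pairs merge early, noise edges merge late.
for thr in (0.3, 0.6, 0.9, 1.2):
    labels = fcluster(Z, t=thr, criterion="distance")
    print(f"threshold {thr:.1f}: {labels.max()} components")
```

Scanning every threshold this way avoids choosing one arbitrary connectivity cutoff, which is the stated motivation for the persistent homological framework.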

  16. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories, and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening conditions (e.g., noise or distortion) that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  17. El Archivo de la Palabra: context and project of the audiovisual repository of the Ateneu Barcelonès

    Directory of Open Access Journals (Sweden)

    Alcaraz Martínez, Rubén

    2014-12-01

    Full Text Available This paper reports on the project to digitize the audiovisual archives of the Ateneu Barcelonès, which was launched by that institution's Library and Archive department in 2011. The paper explains the methodology used to create the repository, focusing on the management of analogue files and born-digital materials and the question of authors' rights. Finally, it presents the new repository, L'Arxiu de la Paraula (the Word Archive), and the new website, @teneu hub, which are designed to disseminate the Ateneu's audiovisual heritage and provide centralized access to its different contents.

  18. Experience in the recruitment, organization and training of operations and maintenance personnel for the Malaysian research reactor

    International Nuclear Information System (INIS)

    Jamal Khair Ibrahim.

    1983-01-01

    The TRIGA Reactor located at the Tun Ismail Atomic Research Centre (PUSPATI) Complex is owned and operated by the Nuclear Energy Unit of the Prime Minister's Department. The operations and maintenance personnel are part and parcel of the national civil service organization. As such, the recruitment and remuneration of these personnel are handled by a central federal government personnel management agency, in common with personnel from other federal government agencies. In addition, the reactor is the first and only one in Malaysia, a developing country in the process of committing itself to a nuclear power programme. These factors, coupled with the absence of an independent reactor operator licensing agency, posed unique problems in the recruitment, organization, training and licensing of operations personnel for the facility. The paper discusses these factors and their bearing on the recruitment, training, licensing and career development prospects of the PUSPATI TRIGA Reactor operators. (author)

  19. Audiovisual Cues and Perceptual Learning of Spectrally Distorted Speech

    Science.gov (United States)

    Pilling, Michael; Thomas, Sharon

    2011-01-01

    Two experiments investigate the effectiveness of audiovisual (AV) speech cues (cues derived from both seeing and hearing a talker speak) in facilitating perceptual learning of spectrally distorted speech. Speech was distorted through an eight-channel noise vocoder, which shifted the spectral envelope of the speech signal to simulate the properties…

  20. Selected Audio-Visual Materials for Consumer Education. [New Version].

    Science.gov (United States)

    Johnston, William L.

    Ninety-two films, filmstrips, multi-media kits, slides, and audio cassettes, produced between 1964 and 1974, are listed in this selective annotated bibliography on consumer education. The major portion of the bibliography is devoted to films and filmstrips. The main topics of the audio-visual materials include purchasing, advertising, money…

  1. Audiovisual sentence repetition as a clinical criterion for auditory development in Persian-language children with hearing loss.

    Science.gov (United States)

    Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Rahimi, Zahra; Mayahi, Anis

    2017-02-01

    It is important for clinicians such as speech-language pathologists and audiologists to develop more efficient procedures to assess the development of auditory, speech, and language skills in children using hearing aids and/or cochlear implants compared with their peers with normal hearing. The aim of this study was therefore to compare the performance of 5-to-7-year-old Persian-language children with and without hearing loss on visual-only, auditory-only, and audiovisual presentations of a sentence repetition task. The research was administered as a cross-sectional study. The sample comprised 92 Persian-speaking 5-to-7-year-old children: 60 with normal hearing and 32 with hearing loss. The children with hearing loss were recruited from the Soroush rehabilitation center for Persian-language children with hearing loss in Shiraz, Iran, through a consecutive sampling method. All of these children had a unilateral cochlear implant or bilateral hearing aids. The assessment tool was the Sentence Repetition Test. The study included three computer-based experiments: visual-only, auditory-only, and audiovisual. The scores were compared within and among the three groups through statistical tests at α = 0.05. Sentence repetition scores differed significantly between the V-only, A-only, and AV presentations in the three groups; in other words, the highest to lowest scores belonged, respectively, to the audiovisual, auditory-only, and visual-only formats. Visual-only scores were not significantly correlated with audiovisual sentence repetition scores in all the 5-to-7-year-old children (r = 0.179, n = 92, P = 0.088), but audiovisual sentence repetition scores were found to be strongly correlated with auditory-only scores (r = 0.943, n = 92, P = 0.000). According to the study's findings, audiovisual integration occurs in 5-to-7-year-old Persian children using hearing aids or cochlear implants during sentence repetition, similar to their peers with normal hearing.
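The correlations reported in this record (e.g., r = 0.943 between audiovisual and auditory-only scores) are plain Pearson coefficients. A minimal sketch with made-up scores for 92 children (the values below are assumptions, not the study's data) could look like:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical per-child scores: auditory-only repetition scores, plus a
# noisy copy standing in for audiovisual scores (illustrative only).
auditory = rng.integers(10, 30, size=92).astype(float)
audiovisual = auditory + rng.normal(0.0, 1.5, size=92)

# Pearson r and its two-sided p-value.
r, p = pearsonr(auditory, audiovisual)
print(f"r = {r:.3f}, significant at 0.001: {p < 0.001}")
```

With scores this tightly coupled, r comes out high and the p-value very small, mirroring the pattern the abstract describes for the audiovisual/auditory-only pair.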

  2. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Full Text Available Speech perception is critical to everyday life. Noise can often degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across signal-to-noise ratios (SNRs), modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to the audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts, whereas older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audiovisual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when the listening context is sufficiently supportive.

  3. Differential elements in the audiovisual form of video games: bonding, presence and immersion

    Directory of Open Access Journals (Sweden)

    María Gabino Campos

    2012-01-01

    Full Text Available In just over two decades, video games have reached the top positions in the audiovisual sector. Various technical, economic, and social factors have made video games the main entertainment reference for a growing audience of millions. This phenomenon is also due to the fact that their creators develop stories with elements of interaction in order to achieve a high investment of time by users. We investigate the concepts of bonding, presence, and immersion for their implications in the sensory universe of video games, and we survey the state of audiovisual research in this field in the first decade of the century.

  4. An overview of international audiovisual sources in television: content, management and rights

    Directory of Open Access Journals (Sweden)

    López de Solís, Iris

    2014-12-01

    Full Text Available Spain's main public service television channels, at both national and regional levels, rely upon a number of audiovisual sources to deliver news on international affairs, including news agencies, news consortia, and correspondent networks. Using the data provided by different channels, this paper examines the coverage, use, and management of these sources, as well as the rights governing their use and archiving, and analyzes the history of the most prominent agencies and the online tools they offer. Finally, it describes the daily work of the Eurovision department of TVE, which in recent months has incorporated documentalists who, in addition to cataloguing the audiovisual material, carry out editing and production tasks.

  5. Education as a Basic Element of Improving Professional Important Qualities of Aviation Technical Maintenance Personnel

    Directory of Open Access Journals (Sweden)

    Gorbačovs Oļegs

    2016-12-01

    Full Text Available This article covers the importance of professional qualities and competence, and their improvement, which depend directly on the training of aviation technical maintenance personnel and determine the level of flight safety. It analyses the necessary training and requirements for aviation technical personnel involved in aircraft maintenance, as well as the requirements, defined in Part-147, for the aviation training organizations that prepare and train such personnel.

  6. A Comparison of the Development of Audiovisual Integration in Children with Autism Spectrum Disorders and Typically Developing Children

    Science.gov (United States)

    Taylor, Natalie; Isaac, Claire; Milne, Elizabeth

    2010-01-01

    This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and typically developing children (control group) matched approximately for age, sex, nonverbal ability and verbal ability.…

  7. The rhythm-image and the music video in the audiovisual

    Directory of Open Access Journals (Sweden)

    Felipe de Castro Muanis

    2012-12-01

    Full Text Available Television can be a meeting space for sound and image in a device that makes the rhythm-image possible, extending Gilles Deleuze's theory of the image, originally proposed for cinema. It would simultaneously combine characteristics of the movement-image and the time-image, which would be embodied in the construction of postmodern images, in audiovisual products that are not necessarily narrative but are popular. Films, video games, music videos, and vignettes in which the music drives the images allow a more sensorial reading. The audiovisual as music-image thus opens onto a new form of perception beyond the traditional textual one, the result of the interaction between rhythm, text, and device. The time of moving images in the audiovisual is inevitably and primarily tied to sound. These images add non-narrative possibilities that are realized, most of the time, through the logic of musical rhythm, which stands out as a fundamental value, as observed in the films Easy Rider (1969), Natural Born Killers (1994), and Run Lola Run (1998).

  8. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    Science.gov (United States)

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  9. The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.

    Science.gov (United States)

    Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano

    2017-12-01

    Recent findings have shown that sounds improve visual detection in low-vision individuals when the pairs of audiovisual stimuli are presented simultaneously and from the same spatial position. The present study aims to investigate the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs of 0, 100, 250, and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded/lagged behind, the visual stimulus (i.e., SOAs of 0, ±250, and ±400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is strongly modulated by top-down mechanisms.

  10. Searching for visibility: appropriations and audiovisual productions by the MST

    Directory of Open Access Journals (Sweden)

    FERNANDO PERLI

    2013-12-01

    Full Text Available The formation of the Landless Rural Workers Movement (MST) was marked by the participation of civil and religious organizations in the production of various media. Among these, audiovisual productions became increasingly recurrent instruments for training its cadres and for the social visibility of the MST. In the 1990s, media coverage and the expansion of the social movement's dissemination mechanisms contributed to the development of projects that encouraged the production of video documentaries by the landless workers. This paper analyzes the appropriations and productions of audiovisual media within the organization of the MST, considering the political significance of audiovisual recognition for the dissemination of the social movement, as well as the debate on the place occupied by different mechanisms for diffusing representations in the struggle for agrarian reform. Keywords: Audiovisual – Social movements – Landless Rural Workers Movement.

  11. The audiovisual document in television broadcasters: selection, conservation and treatment

    OpenAIRE

    Rodríguez-Bravo, Blanca

    2004-01-01

    This paper analyzes the peculiarities of audiovisual material and its management in television information units. In accordance with the aims of television information centers, conservation and treatment, the main approaches to the selection of audiovisual messages are considered, and some thoughts on their content analysis with a view to retrieval are offered.

  12. Las aventuras de Zamba. Some notes on audiovisual communication in a TV channel for children of the argentinian Ministry of Education

    Directory of Open Access Journals (Sweden)

    Sabina Crivelli

    2015-12-01

    Full Text Available Starting in 2009, within the framework of a process of de-monopolization of audiovisual communication, several public policies were developed in Argentina with the purpose of extending participation in the production of audiovisual content. In this paper, the main aesthetic qualities of an audiovisual program, Las aventuras de Zamba, produced by a state-run TV channel for children, are analyzed. Some tensions arising in the state/market relationship when producing artistic representations of otherness are also examined.

  13. 2010 Canadian Cardiovascular Society/Canadian Heart Rhythm Society Training Standards and Maintenance of Competency in Adult Clinical Cardiac Electrophysiology.

    Science.gov (United States)

    Green, Martin S; Guerra, Peter G; Krahn, Andrew D

    2011-01-01

    The last guidelines on training for adult cardiac electrophysiology (EP) were published by the Canadian Cardiovascular Society in 1996. Since then, substantial changes in the knowledge and practice of EP have mandated a review of the previous guidelines by the Canadian Heart Rhythm Society, an affiliate of the Canadian Cardiovascular Society. Novel tools and techniques also now allow electrophysiologists to map and ablate increasingly complex arrhythmias previously managed with pharmacologic or device therapy. Furthermore, no formal attempt had previously been made to standardize EP training across the country. The 2010 Canadian Cardiovascular Society/Canadian Heart Rhythm Society Training Standards and Maintenance of Competency in Adult Clinical Cardiac Electrophysiology represent a consensus arrived at by panel members from both societies, as well as EP program directors across Canada and other select contributors. In describing program requirements, the technical and cognitive skills that must be acquired to meet training standards, as well as the minimum number of procedures needed in order to acquire these skills, the new guidelines provide EP program directors and committee members with a template to develop an appropriate curriculum for EP training for cardiology fellows here in Canada. Copyright © 2011 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  14. Training in radiation protection of workers at Electricite de France nuclear power plants

    International Nuclear Information System (INIS)

    Aye, Louis

    1980-01-01

    The safety of workers and the population is a major concern of the nuclear industry. In order to carry out its programme of PWR power plants, Electricite de France has extensively developed the radiation protection training of its personnel. Operations workers now number some 5000 persons; they first receive training organized at the national level, consisting of training courses, which are then completed and continued on site. The training makes wide use of audiovisual materials; it is checked by tests and leads to better qualification. Close coordination is sought with competent outside organizations [fr

  15. Audiovisual integration in depth: multisensory binding and gain as a function of distance.

    Science.gov (United States)

    Noel, Jean-Paul; Modi, Kahan; Wallace, Mark T; Van der Stoep, Nathan

    2018-07-01

    The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and a redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies, SOAs). In line with previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual-subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and this relationship is specific to temporally synchronous audiovisual stimulus presentations.
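A simultaneity-judgment window of the kind measured in this record can be summarized in a few lines. The response proportions below are hypothetical (not the study's data) and simply show one common way to estimate a temporal binding window's width:

```python
import numpy as np

# Hypothetical proportions of "synchronous" responses at each SOA (ms).
soas = np.array([-400, -250, -150, 0, 150, 250, 400])
p_sync = np.array([0.10, 0.30, 0.70, 0.95, 0.75, 0.35, 0.12])

# One simple definition: the window spans the SOAs where the proportion of
# "synchronous" responses exceeds half of its peak value.
threshold = p_sync.max() / 2.0
inside = soas[p_sync >= threshold]
window_width = inside.max() - inside.min()
print(window_width)  # → 300 (ms)
```

In practice a Gaussian or psychometric function is usually fitted to the response proportions before reading off the width, but the half-maximum rule above captures the idea.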

  16. Effect of job maintenance training program for employees with chronic disease - a randomized controlled trial on self-efficacy, job satisfaction, and fatigue.

    Science.gov (United States)

    Varekamp, Inge; Verbeek, Jos H; de Boer, Angela; van Dijk, Frank J H

    2011-07-01

    Employees with a chronic physical condition may be hampered in job performance due to physical or cognitive limitations, pain, fatigue, psychosocial barriers, or because medical treatment interferes with work. This study investigates the effect of a group-training program aimed at job maintenance. Essential elements of the program are exploration of work-related problems, communication at the workplace, and the development and implementation of solutions. Participants with chronic physical diseases were randomly assigned to the intervention (N=64) or the control group (N=58). Participants were eligible for the study if they had a chronic physical disease, paid employment, experienced work-related problems, and were not on long-term 100% sick leave. Primary outcome measures were self-efficacy in solving work- and disease-related problems (14-70), job dissatisfaction (0-100), fatigue (20-140) and job maintenance measured at 4-, 8-, 12- and 24-month follow-up. We used GLM repeated measures for the analysis. After 24 months, loss to follow-up was 5.7% (7/122). Self-efficacy increased and fatigue decreased significantly more in the experimental than the control group [10 versus 4 points (P=0.000) and 19 versus 8 points (P=0.032), respectively]. Job satisfaction increased more in the experimental group but not significantly [6 versus 0 points (P=0.698)]. Job maintenance was 87% in the experimental and 91% in the control group, which was not a significant difference. Many participants in the control group also undertook actions to solve work-related problems. Empowerment training increases self-efficacy and helps to reduce fatigue complaints, which in the long term could lead to more job maintenance. Better understanding of ways to deal with work-related problems is needed to develop more efficient support for employees with a chronic disease.

  17. Planning and control of maintenance systems modelling and analysis

    CERN Document Server

    Duffuaa, Salih O

    2015-01-01

    Analyzing maintenance as an integrated system with objectives, strategies, and processes that need to be planned, designed, engineered, and controlled using statistical and optimization techniques, the theme of this book is a strategic, holistic systems approach to maintenance. This approach enables maintenance decision makers to view maintenance as a provider of a competitive edge, not a necessary evil. Encompassing maintenance systems, maintenance strategic and capacity planning, planned and preventive maintenance, work measurement and standards, material (spares) control, maintenance operations and control, planning and scheduling, maintenance quality, training, and other topics, the book gives readers an understanding of the relevant methodology and how to apply it to real-world problems in industry. Each chapter includes a number of exercises, and the book is suitable as a textbook or a reference for professionals and practitioners, while being of interest to industrial engineering, mechanical engineering, electrical en...

  18. Congruent and Incongruent Cues in Highly Familiar Audiovisual Action Sequences: An ERP Study

    Directory of Open Access Journals (Sweden)

    SM Wuerger

    2012-07-01

    Full Text Available In a previous fMRI study we found significant differences in BOLD responses to congruent and incongruent semantic audio-visual action sequences (whole-body actions and speech actions) in bilateral pSTS, left SMA, left IFG, and IPL (Meyer, Greenlee, & Wuerger, JOCN, 2011). Here, we present results from a 128-channel ERP study that examined the time course of these interactions using a one-back task. ERPs in response to congruent and incongruent audio-visual actions were compared to identify regions and latencies of differences. Responses to congruent and incongruent stimuli differed between 240–280 ms, 340–420 ms, and 460–660 ms after stimulus onset. A dipole analysis revealed that the difference around 250 ms can be partly explained by a modulation of sources in the vicinity of the superior temporal area, while the responses after 400 ms are consistent with sources in inferior frontal areas. Our results are in line with a model that postulates early recognition of congruent audiovisual actions in the pSTS, perhaps as a sensory memory buffer, and a later role of the IFG, perhaps in a generative capacity, in reconciling incongruent signals.

  19. Identification of Depressive Signs in Patients and Their Family Members During iPad-based Audiovisual Sessions.

    Science.gov (United States)

    Smith, Carol E; Werkowitch, Marilyn; Yadrich, Donna Macan; Thompson, Noreen; Nelson, Eve-Lynn

    2017-07-01

    Home parenteral nutrition requires a daily, life-sustaining intravenous infusion over 12 hours. The daily intravenous infusion home care procedures are stringent, time-consuming tasks for patients and family caregivers, who often experience depression. The purposes of this study were (1) to assess home parenteral nutrition patients and caregivers for depression and (2) to assess whether depressive signs can be seen during audiovisual discussion sessions using an Apple iPad Mini. In a clinical trial (N = 126), a subsample of 21 participants (16.7%) had depressive symptoms. Of those with depression, 13 participants were home parenteral nutrition patients and eight were family caregivers; ages ranged from 20 to 79 years (mean 48.9 [standard deviation 17.37] years); 76.2% were female. Individual assessments by the mental health nurse found factors related to depressive symptoms across all 21 participants. A different nurse observed participants for signs of depression when viewing the videotapes of the discussion sessions on the audiovisual technology. The conclusions are that depression questionnaires, individual assessment, and observation using audiovisual technology can identify depressive symptoms. Considering the growing provision of healthcare at a distance via technology, the recommendation is to observe and assess for known signs and symptoms of depression during all audiovisual interactions.

  20. Working-memory training in younger and older adults: Training gains, transfer, and maintenance

    Directory of Open Access Journals (Sweden)

    Yvonne eBrehmer

    2012-03-01

    Full Text Available Working memory (WM), a key determinant of many higher-order cognitive functions, declines in old age. Current research attempts to develop process-specific WM training procedures, which may lead to general cognitive improvement. Adaptivity of the training, as well as the comparison of training gains to performance changes of an active control group, are key factors in evaluating the effectiveness of a specific training program. In the present study, 55 younger adults (20-30 years of age) and 45 older adults (60-70 years of age) received five weeks of computerized training on various spatial and verbal WM tasks. Half of the sample received adaptive training (i.e., individually adjusted task difficulty), whereas the other half worked on the same task material but at a low task difficulty level (active controls). Performance was assessed using criterion, near-transfer, and far-transfer tasks before training, after 5 weeks of intervention, as well as after a 3-month follow-up interval. Results indicate that (a) adaptive training generally led to larger training gains than low-level practice, (b) training and transfer gains were somewhat greater for younger than for older adults in some tasks, but comparable across age groups in other tasks, (c) far transfer was observed to a test of sustained attention and for a self-rating scale on cognitive functioning in daily life for both young and old, and (d) training gains and transfer effects were maintained across the 3-month follow-up interval in both age groups.