This paper focuses on modifications to an institutional repository system using the open source DSpace software to support playback of digital videos embedded within item pages. The changes were made in response to the formation and quick startup of an event capture group within the library that was charged with creating and editing video recordings of library events and speakers. This paper specifically discusses the selection of video formats, changes to the visual theme of the repository to allow embedded playback and captioning support, and modifications and bug fixes to the file downloading subsystem to enable skip-ahead playback of videos via byte-range requests. This paper also describes workflows for transcoding videos in the required formats, creating captions, and depositing videos into the repository.
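The byte-range mechanism mentioned in the abstract can be illustrated with a short sketch. This is a hedged, minimal example, not DSpace's actual code: it parses a single-range HTTP `Range` header of the form `bytes=start-end` (RFC 7233) so a download subsystem can serve the slice a player requests when the user skips ahead; the function name is illustrative.

```python
def parse_byte_range(header, file_size):
    """Return (start, end) inclusive byte offsets for a single-range header."""
    units, _, spec = header.partition("=")
    if units.strip() != "bytes" or "," in spec:
        raise ValueError("unsupported Range header")
    start_s, _, end_s = spec.strip().partition("-")
    if start_s:  # "bytes=100-199" or open-ended "bytes=100-"
        start = int(start_s)
        end = int(end_s) if end_s else file_size - 1
    else:        # suffix form "bytes=-500": the last 500 bytes
        start = max(file_size - int(end_s), 0)
        end = file_size - 1
    if start > end or end >= file_size:
        raise ValueError("range not satisfiable")
    return start, end
```

A server would respond with status 206 and a `Content-Range: bytes start-end/total` header built from the returned pair.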
Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi
MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions, such as backward playback, on MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group of pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant decoder buffer, whose size depends on the GOP size, to store the decoded frames, and may be infeasible when decoder memory is severely constrained. An alternative is to decode the GOP up to the current frame to be displayed, and then go back and decode the GOP again up to the next frame to be displayed. This approach does not need the large buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on network bandwidth and decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate the different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a new mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the
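The buffer/bandwidth trade-off between the two straightforward approaches described in this abstract can be made concrete with a back-of-envelope sketch (illustrative only, not the paper's proposed algorithm):

```python
def naive_buffer_frames(gop_size):
    """Strategy 1: decode the whole GOP once and buffer every frame.
    Decoder memory grows linearly with the GOP size."""
    return gop_size

def naive_redecode_count(gop_size):
    """Strategy 2: re-decode the GOP prefix for each displayed frame.
    Displaying frame k (0-based) costs k + 1 decodes, so the total
    number of decode operations is quadratic in the GOP size."""
    return sum(k + 1 for k in range(gop_size))

# For a typical 15-frame GOP: 15 buffered frames vs. 120 decode operations.
```

The quadratic growth of the second strategy is what motivates compressed-domain techniques that avoid re-decoding most macroblocks.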
Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song
Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in a packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding with the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved by more advanced network architectures, such as ATM, as they have promised. This paper presents some solutions to these problems that are useful at end-station terminals in the massively deployed packet-switching networks of today. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream that can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip-synchronization errors, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
Gunay, Omer; Ozsarac, Ismail; Kamisli, Fatih
Video recording is an essential feature of new-generation military imaging systems. Playback of the stored video on the same device is also desirable, as it provides several operational benefits to end users. Two very important constraints for many military imaging systems, especially for hand-held devices and thermal weapon sights, are power consumption and size. To meet these constraints, it is essential to perform most of the processing applied to the video signal, such as preprocessing, compression, storage, decoding, playback and other system functions, on a single programmable chip, such as an FPGA, DSP, GPU or ASIC. In this work, H.264/AVC (Advanced Video Coding) compatible video compression, storage, decoding and playback blocks are efficiently designed and implemented on FPGA platforms using FPGA fabric and the Altera NIOS II soft processor. Many subblocks used in video encoding are reused during video decoding in order to save FPGA resources and power. Computationally complex blocks are designed in FPGA fabric, while blocks such as SD card write/read, H.264 syntax decoding and CAVLC decoding run on the NIOS processor to benefit from software flexibility. In addition, to keep power consumption low, the system was designed to require limited external memory access. The design was tested using a 640x480, 25 fps thermal camera on a Cyclone V FPGA, Altera's lowest-power FPGA family, and consumes less than 40% of Cyclone V 5CEFA7 FPGA resources on average.
Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan
Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speeds during video playback respectively result in over- and underproduction of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task, and b) the interactive effect of frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback speed, and no interactive effect between video playback speed and frame rate, was found on time production.
Hämäläinen, Liisa; Rowland, Hannah M; Mappes, Johanna; Thorogood, Rose
Video playback is becoming a common method for manipulating social stimuli in experiments. Parid tits are one of the most commonly studied groups of wild birds. However, it is not yet clear if tits respond to video playback or how their behavioural responses should be measured. Behaviours may also differ depending on what they observe demonstrators encountering. Here we presented blue tits (Cyanistes caeruleus) with videos of demonstrators discovering palatable or aversive prey (injected with bitter-tasting Bitrex) from coloured feeding cups. First we quantify variation in demonstrators' responses to the prey items: aversive prey provoked high rates of beak wiping and head shaking. We then show that focal blue tits respond differently to the presence of a demonstrator on a video screen, depending on whether demonstrators discover palatable or aversive prey. Focal birds faced the video screen more during aversive prey presentations, and made more head turns. Regardless of prey type, focal birds also hopped more frequently during the presence of a demonstrator (compared to a control video of a different coloured feeding cup in an empty cage). Finally, we tested if demonstrators' behaviour affected focal birds' food preferences by giving individuals a choice to forage from the same cup as a demonstrator, or from the cup in the control video. We found that only half of the individuals made their choice in accordance with social information in the videos, i.e., their foraging choices were not different from random. Individuals that chose in accordance with a demonstrator, however, made their choice faster than individuals that chose an alternative cup. Together, our results suggest that video playback can provide social cues to blue tits, but individuals vary greatly in how they use this information in their foraging decisions.
Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.
Guillette, Lauren M; Healy, Susan D
The transmission of information from an experienced demonstrator to a naïve observer often depends on characteristics of the demonstrator, such as familiarity, success or dominance status. Whether or not the demonstrator pays attention to and/or interacts with the observer may also affect social information acquisition or use by the observer. Here we used a video-demonstrator paradigm first to test whether video demonstrators have the same effect as using live demonstrators in zebra finches, and second, to test the importance of visual and vocal interactions between the demonstrator and observer on social information use by the observer. We found that female zebra finches copied novel food choices of male demonstrators they saw via live-streaming video while they did not consistently copy from the demonstrators when they were seen in playbacks of the same videos. Although naive observers copied in the absence of vocalizations by the demonstrator, as they copied from playback of videos with the sound off, females did not copy where there was a mis-match between the visual information provided by the video and vocal information from a live male that was out of sight. Taken together these results suggest that video demonstration is a useful methodology for testing social information transfer, at least in a foraging context, but more importantly, that social information use varies according to the vocal interactions, or lack thereof, between the observer and the demonstrator. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
S. J. Oh
We developed the playback control software for a high-speed playback system which is a component of the Korea-Japan Joint VLBI Correlator (KJJVC). The Mark5B system, the recorder and playback system used in the Korean VLBI Network (KVN), has two operation modes: the station unit (SU) mode, for the present Mark4 system, and the VSI mode, for the new VLBI Standard Interface (VSI) system. The software for the SU mode is already developed and widely used in Mark4-type VLBI systems, but the software for VSI had only been developed for recording. The new VLBI system is designed with a VSI interface for compatibility between different systems; therefore, playback control software for the VSI mode is needed for the KVN. In this work, we developed the playback control software for the Mark5B VSI mode. The developed software consists of an application part for playing back data, a data input/output part for the VSI board, a module for the StreamStor RAID board, and a user interface part, including an observation time control part. To verify the performance of the developed playback control software, playback and correlation experiments were performed using real observation data on the Mark5B system and KJJVC. To check the observation time control, a data playback experiment was performed between the Mark5B and Raw VLBI Data Buffer (RVDB) systems. Through these experimental results, we confirmed the performance of the developed playback control software in the Mark5B VSI mode.
Spicer, Scott; Horbal, Andrew
Instructional support is one of the primary reasons academic libraries collect video materials. Nonetheless, no one has published research into the perceptions of the people who install and maintain the equipment used to play these materials in college and university classrooms regarding the longevity of physical media formats. To address this gap…
Wu Wu; Jiulin Hu; Xiaofang Huang; Huijie Chen; Bo Sun
Recreation of flight trajectories is an important research area. The design of a flight trajectory recreation and playback system is presented in this paper. Rather than converting the flight data into diagrams, graphs and tables, the flight data is visualized on the 3D globe of ossimPlanet. ossimPlanet is an open-source 3D geospatial viewer, and the system is realized based on an analysis of it. Users are allowed to choose the flight of an aerial mission they are interested in. The aerial ...
Ge, Jing; Zhang, Guoping; Yang, Zongkai
Multimedia technology and network protocols are the basic technologies of video surveillance systems. A networked remote video surveillance system based on the MPEG-4 video coding standard is designed and implemented in this paper. The advantages of MPEG-4 in the surveillance field are analyzed in detail, and the Real-time Transport Protocol and RTP Control Protocol (RTP/RTCP) are chosen as the network transmission protocols. The whole system includes a video coding control module, a playback module, a network transmission module and a network receiver module. The scheme for management, control and storage of video data is discussed. DirectShow technology is used to play back video data. The transmission scheme for digital video over networks, including RTP packaging of the MPEG-4 video stream, is discussed, as are the receiving scheme for video data and the buffering mechanism. Most of the functions are achieved in software, except for the video coding control module, which is implemented in hardware. The experimental results show that the system provides good video quality and real-time performance. This system can be applied in a wide range of fields.
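RTP packaging, as mentioned in this abstract, wraps each media payload in a fixed 12-byte header (RFC 3550). The following is a hedged sketch of building such a packet; the field layout is the standard one, but the dynamic payload type 96, commonly used for MPEG-4 video, is an assumption rather than a value taken from the paper:

```python
import struct

def rtp_packet(payload, seq, timestamp, ssrc, payload_type=96, marker=False):
    """Prepend an RFC 3550 fixed header (12 bytes, network byte order)."""
    byte0 = 0x80                                  # version=2, no padding/ext/CSRC
    byte1 = (0x80 if marker else 0) | (payload_type & 0x7F)
    header = struct.pack("!BBHII", byte0, byte1,
                         seq & 0xFFFF,            # 16-bit sequence number
                         timestamp & 0xFFFFFFFF,  # 32-bit media timestamp
                         ssrc & 0xFFFFFFFF)       # 32-bit source identifier
    return header + payload
```

The sequence number lets the receiver detect loss and reordering, and the timestamp drives playout buffering, matching the receiving scheme the abstract describes.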
An HTTP-based video transmission system has been built upon a P2P (peer-to-peer) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated subnetworks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer can respond to video stream requests over the HTTP protocol. An HTTP-based pipe communication model is developed to speed up the transmission of video stream data, which is encoded into fragments using the JPEG codec. To make the system capable of conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
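One common way to push JPEG-encoded fragments over plain HTTP, in the spirit of the pipe model described in this abstract, is a multipart/x-mixed-replace stream. The sketch below frames a single JPEG fragment for such a stream; the boundary string and function name are illustrative assumptions, not details from the original system:

```python
BOUNDARY = b"frame"

def mjpeg_part(jpeg_bytes):
    """Wrap one JPEG fragment as a multipart part ready to be written
    to an open HTTP response stream."""
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode() + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")
```

Because each part is a self-contained HTTP entity, a relay peer can forward parts byte-for-byte without decoding the JPEG data.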
Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)
This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration, including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.
Gustafson, Peter C.
For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis involved was also brought `on-board' to the RVPS, allowing shop floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.
Diehl, P; Helb, H W; Koch, U T; Lösch, M
In acoustical stimulus-response tests on European blackbirds (Turdus merula) in cages and an outdoor aviary, alteration in heart rate (HR) was used to measure reaction strength. HR was measured by radiotelemetry. The miniature transmitters newly developed for this task had to fulfill the following requirements: simultaneous recording of HR in several interacting animals; uninterrupted transmission of HR signals; and sufficient range and battery life combined with low weight and easy handling. The miniature transmitters successfully used in this experiment had a quartz-stabilized oscillator. They weighed between 4.1 and 5.2 g and had a range of 3 m and a lifetime of 72 hrs (circuit diagram, Fig. 1). The transmitted signal corresponded to a unitary impulse representing the S-wave of the ECG (Fig. 4b). Implanted electrodes were used to record ECG potentials. The transmitter was carried by the birds like a small rucksack tied to their backs. Electrode implantation and transmitter installation are described in detail. HR signals stored on audio tape were later transformed to frequency curves on a chart recorder (Fig. 3, 4a). Typical HR response curves are shown (Fig. 5). Statistical analysis of the data was performed on a DEC PDP-11 computer using a special set of programs. The system has been successfully used to provide answers to experimental questions not previously obtainable with classical methods. Copyright © 1986. Published by Elsevier B.V.
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the turn of the millennium. Video analytics aims to solve the problem of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi
Holles, Sophie; Simpson, Stephen D; Lecchini, David; Radford, Andrew N
Playbacks are a useful tool for conducting well-controlled and replicated experiments on the effects of anthropogenic noise, particularly for repeated exposures. However, playbacks are unlikely to fully reproduce original sources of anthropogenic noise. Here we examined the sound pressure and particle acceleration of boat noise playbacks in a field experiment and reveal that although there remain recognized limitations, the signal-to-noise ratios of boat playbacks to ambient noise do not exceed those of a real boat. The experimental setup tested is therefore of value for use in experiments on the effects of repeated exposure of aquatic animals to boat noise.
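The signal-to-noise comparison described in the abstract above reduces to a ratio of sound levels expressed in decibels. A minimal sketch follows; the function name and the use of RMS pressure values are assumptions for illustration, not the study's analysis code:

```python
import math

def snr_db(signal_rms, ambient_rms):
    """Signal-to-noise ratio in dB for two RMS pressure amplitudes.
    A factor of 20 (not 10) is used because pressure is an amplitude,
    not a power, quantity."""
    return 20.0 * math.log10(signal_rms / ambient_rms)

# Example: a playback 10x the ambient RMS pressure has a 20 dB SNR.
```

Computing this ratio for both the playback and a real boat, relative to the same ambient recording, gives the comparison the experiment relies on.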
Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))
The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.
Holt, N. I. (Inventor)
A video tape recorder is disclosed of sufficient bandwidth to record monochrome television signals or standard NTSC field sequential color at current European and American standards. The system includes scan conversion means for instantaneous playback at scanning standards different from those at which the recording is being made.
Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality, and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume:Describes all components relevant to modern IP video surveillance systemsProvides in-depth information about ima
Hsu, Charles; Szu, Harold
An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system, and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System's capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as to applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Alsmirat, Mohammad Abdullah
Video streaming has recently grown dramatically in popularity over the Internet, cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…
A video surveillance system senses and tracks threatening activity in a real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed over IP-based networks, so all the security threats that exist for IP-based applications may also threaten a video surveillance deployment, potentially leading to cybercrime, illegal video access, mishandling of videos and so on. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.
Akeroyd, Michael A; Chambers, John; Bullock, David; Palmer, Alan R; Summerfield, A Quentin; Nelson, Philip A; Gatehouse, Stuart
Cross-talk cancellation is a method for synthesizing virtual auditory space using loudspeakers. One implementation is the "Optimal Source Distribution" technique [T. Takeuchi and P. Nelson, J. Acoust. Soc. Am. 112, 2786-2797 (2002)], in which the audio bandwidth is split across three pairs of loudspeakers, placed at azimuths of +/-90 degrees, +/-15 degrees, and +/-3 degrees, conveying low, mid, and high frequencies, respectively. A computational simulation of this system was developed and verified against measurements made on an acoustic system using a manikin. Both the acoustic system and the simulation gave a wideband average cancellation of almost 25 dB. The simulation showed that when there was a mismatch between the head-related transfer functions used to set up the system and those of the final listener, the cancellation was reduced to an average of 13 dB. Moreover, in this case the binaural interaural time differences and interaural level differences delivered by the simulation of the optimal source distribution (OSD) system often differed from the target values. It is concluded that only when the OSD system is set up with "matched" head-related transfer functions can it deliver accurate binaural cues.
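The band-splitting idea behind the OSD technique described above can be sketched as a simple crossover assignment of frequencies to the three loudspeaker pairs. The azimuths (+/-90, +/-15, and +/-3 degrees) are taken from the abstract, but the crossover frequencies below are illustrative assumptions only, not values from the paper:

```python
def osd_pair_azimuth(freq_hz, low_cross=500.0, high_cross=4000.0):
    """Return the azimuth (degrees) of the loudspeaker pair assumed to
    carry the given frequency: widely spaced pairs for low frequencies,
    closely spaced pairs for high frequencies."""
    if freq_hz < low_cross:
        return 90   # +/-90 degree pair: low band
    if freq_hz < high_cross:
        return 15   # +/-15 degree pair: mid band
    return 3        # +/-3 degree pair: high band
```

The motivation for this ordering is that cross-talk cancellation filters stay well-conditioned when the loudspeaker spacing is roughly matched to the wavelength of the band each pair reproduces.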
From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities real-time? Collating and presenting the latest information Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamentals principle to algorithmic design and system implementation.An Integrated
... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule... Open Video Systems. DATES: The amendments to 47 CFR 76.1505(d) and 76.1506(d), (l)(3), and (m)(2...
Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone
Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition that biases an individual toward playing video games, or whether these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience, as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training group (TG) or a control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video-game-related reward task. At pretest, both groups showed the strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed VS activity very similar to pretest, whereas in the CG, VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism that might be of critical value for applications such as therapeutic cognitive training.
Byrnes, Patrick D.; Higgins, William E.
Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could better equip physicians to detect early airway-wall cancer or improve asthma treatments such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprising a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airway-tree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval are facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case studies.
Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua
Video systems have been widely used in many fields such as conferencing, public security, military affairs and medical treatment. With the rapid development of FPGAs, the system-on-a-programmable-chip (SOPC) approach has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for the purpose of video acquisition, video encoding and network transmission. The hardware platform used to design the system is Altera's DE2 SOPC board, which includes an EP2C35F672C6 FPGA, an Ethernet controller and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data and another module realizing Motion-JPEG have been designed in Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that both modules work as expected. uClinux, including the TCP/IP protocol stack and the Ethernet controller driver, is chosen as the embedded operating system, and an application program scheme is proposed.
Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.
A video observation system for the electron beam welding process was developed. The construction of the system reduces negative effects on the video camera during electron beam welding and yields high-quality images of the process.
Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata
The purpose of the project is the development of a platform that integrates video signals from many sources. The signals can be sourced from existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras, or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission and archiving. The sharing subsystem will use a distributed file system and a user console that provides simultaneous access to any of the video streams in real time. The system is fully modular, so it can be extended on both the hardware and the software side. Because standard modular technology is used, partial technology modernization is also possible during a long exploitation period.
Full Text Available Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by recent findings in computational neuroscience on feed-forward object detection and classification pipelines for processing and extracting relevant information from visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway, and combines retinal processing, form-based and motion-based object detection, and object classification based on convolutional neural networks. Our system was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the NEOVISION2 program on a variety of urban-area video datasets collected from both stationary and moving platforms. The datasets are challenging, as they include a large number of targets in cluttered scenes with varying illumination and occlusion conditions. The NEOVUS system was also mapped to commercially available off-the-shelf hardware. The dynamic power requirement for the system, which includes a 5.6 Mpixel retinal camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 watts (W), for an effective energy consumption of 5.4 nanojoules (nJ) per bit of incoming video. In a systematic evaluation of five different teams by DARPA on three aerial datasets, the NEOVUS demonstrated the best performance, with the highest recognition accuracy and at least three orders of magnitude lower energy consumption than two independent state-of-the-art computer vision systems. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition toward enabling practical low-power, mobile video processing applications.
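The reported energy figure can be sanity-checked with a little arithmetic. A minimal sketch, assuming 24-bit colour (the pixel depth is not stated in the abstract):

```python
# Sanity check of the reported ~5.4 nJ/bit figure for the NEOVUS system.
# Assumption: 24 bits per pixel (not stated in the abstract).
power_w = 21.7               # measured dynamic power in watts
pixels_per_frame = 5.6e6     # 5.6 Mpixel retinal camera
fps = 30                     # frames per second
bits_per_pixel = 24          # assumed colour depth

bit_rate = pixels_per_frame * fps * bits_per_pixel   # bits/s of incoming video
energy_per_bit_nj = power_w / bit_rate * 1e9         # joules -> nanojoules
```

Under these assumptions the result comes out at roughly 5.4 nJ per bit, consistent with the abstract's figure.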
Resilient Systems sponsored the first ESP project in December 2013. This pilot study's objective was to examine the feasibility of using gaming as a means...via photos or video, and add their own commentary to complete the package. This content is typically posted in message board threads, which draw in...was done by speeding up the playback of the video in between the voice narration parts of the video. However, Group 1 had a secondary role in this
King, Stephanie L
Over the years, playback experiments have helped further our understanding of the wonderful world of animal communication. They have provided fundamental insights into animal behaviour and the function of communicative signals in numerous taxa. As important as these experiments are, however, there is strong evidence to suggest that the information conveyed in a signal may only have value when presented interactively. By their very nature, signalling exchanges are interactive and therefore, an interactive playback design is a powerful tool for examining the function of such exchanges. While researchers working on frog and songbird vocal interactions have long championed interactive playback, it remains surprisingly underused across other taxa. The interactive playback approach is not limited to studies of acoustic signalling, but can be applied to other sensory modalities, including visual, chemical and electrical communication. Here, I discuss interactive playback as a potent yet underused technique in the field of animal behaviour. I present a concise review of studies that have used interactive playback thus far, describe how it can be applied, and discuss its limitations and challenges. My hope is that this review will result in more scientists applying this innovative technique to their own study subjects, as a means of furthering our understanding of the function of signalling interactions in animal communication systems. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Xia, Jiali; Jin, Jesse S.
Video-On-Demand is a new development on the Internet. In order to manage rich multimedia information and a large number of users, we present an Internet Video-On-Demand system with some e-commerce features. This paper presents the system architecture and the technologies required in its implementation. The system provides interactive Video-On-Demand services in which the user has complete control over the session presentation, and allows the user to select and receive specific video information by querying the database. To improve the performance of video-information retrieval and management, the video information is represented by hierarchical video metadata in XML format. The video metadata database stores the video information in this hierarchical structure and allows the user to search for video shots at different semantic levels. To browse the retrieved video, the user not only has the full VCR capabilities of traditional Video-On-Demand, but can also browse the video hierarchically to view different shots. To manage a large number of users over the Internet, a membership database is designed and managed in an e-commerce environment, which allows users to access the video database at different access levels.
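The hierarchical XML metadata the abstract describes might look roughly like the sketch below (video, scene and shot levels). The element and attribute names here are illustrative assumptions, not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical hierarchical metadata: video -> scene -> shot.
video = ET.Element("video", title="Campus tour")
scene = ET.SubElement(video, "scene", id="1", keywords="entrance")
ET.SubElement(scene, "shot", id="1-1", start="00:00:00", end="00:00:12")
ET.SubElement(scene, "shot", id="1-2", start="00:00:12", end="00:00:30")

def find_shots(root, scene_keyword):
    """Search for shots at the scene semantic level by keyword."""
    return [shot.get("id")
            for sc in root.findall("scene")
            if scene_keyword in sc.get("keywords", "")
            for shot in sc.findall("shot")]
```

A query such as `find_shots(video, "entrance")` would return the shot identifiers of every matching scene, illustrating search at one semantic level of the hierarchy.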
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
…possible reduction of the digital stream. Among orthogonal transformations, the discrete cosine transformation is the most widely used. This paper analyzes the errors of television measuring systems and of data compression protocols: the main characteristics of the measuring systems are identified along with the sources of their errors, and the most effective methods of video compression are determined. The influence of video compression error on television measuring systems was researched; the results obtained will increase the accuracy of such systems. In a television measuring system, image quality is reduced both by distortions identical to those in analog systems and by specific distortions resulting from encoding/decoding the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding the signal include quantization noise, reduced resolution, the mosaic effect, the "mosquito" effect, edging on sharp brightness transitions, colour blur, false patterns, the "dirty window" effect, and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is nonlinear in space and in time, because the playback quality at the receiver depends randomly on the pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate distortion of the sub-picture and of the corresponding measuring signal.
Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split-screen superimposition, of real-time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes, and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.
Gershkoff, I.; Haspert, J. K.; Morgenstern, B.
A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, and operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.
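The model's core step, picking the least expensive distribution path for each site, can be sketched as follows. The path names and cost figures below are purely illustrative assumptions, not data from the report:

```python
def cheapest_path(site_costs):
    """Pick the least expensive signal distribution path for one site.

    site_costs maps a candidate path name to its total cost
    (capital + installation + lease + operations and maintenance).
    """
    return min(site_costs, key=site_costs.get)

# Illustrative per-site cost table (hypothetical numbers).
example_site = {"C-band": 12000, "Ku-band": 9000, "terrestrial": 15000}
```

Running `cheapest_path(example_site)` selects the lowest-cost option, which the model would repeat for every participating site in the network.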
Harris, J Berton C; Haskell, David G
Although recreational birdwatchers may benefit conservation by generating interest in birds, they may also have negative effects. One such potentially negative impact is the widespread use of recorded vocalizations, or "playback," to attract birds of interest, including range-restricted and threatened species. Although playback has been widely used to test hypotheses about the evolution of behavior, no peer-reviewed study has examined the impacts of playback in a birdwatching context on avian behavior. We studied the effects of simulated birdwatchers' playback on the vocal behavior of Plain-tailed Wrens Thryothorus euophrys and Rufous Antpittas Grallaria rufula in Ecuador. Study species' vocal behavior was monitored for an hour after playing either a single bout of five minutes of song or a control treatment of background noise. We also studied the effects of daily five minute playback on five groups of wrens over 20 days. In single bout experiments, antpittas made more vocalizations of all types, except for trills, after playback compared to controls. Wrens sang more duets after playback, but did not produce more contact calls. In repeated playback experiments, wren responses were strong at first, but hardly detectable by day 12. During the study, one study group built a nest, apparently unperturbed, near a playback site. The playback-induced habituation and changes in vocal behavior we observed suggest that scientists should consider birdwatching activity when selecting research sites so that results are not biased by birdwatchers' playback. Increased vocalizations after playback could be interpreted as a negative effect of playback if birds expend energy, become stressed, or divert time from other activities. In contrast, the habituation we documented suggests that frequent, regular birdwatchers' playback may have minor effects on wren behavior.
Mooney, J. B. [Schweitzer Engineering Laboratories Inc., Pullman, WA (United States)
The need to test protection schemes under realistic power system conditions, as opposed to doing steady-state tests, was discussed. Transient testing is one of the methods that gives engineers the confidence they need to use newly developed protection schemes. Real-time digital simulators typically use a program like the Electromagnetic Transients Program (EMTP) to model power systems. The digitally generated output of EMTP is converted to analog signals in real-time mode via digital-to-analog converters and power amplifiers. COMTRADE is one of the standards that provides a format for playback of power system events. This paper describes the method for transient testing of protective relays using EMTP as the means of modeling the power system, and for replaying the modeled disturbances in an automatic batch mode playback, for a complete, cost effective and thorough testing of the protection system. The system is most useful in situations where the protection system is reasonably predictable or where the number of cases is relatively small. 3 refs., 4 figs.
stranding has not been elucidated. We now know that beaked whales react strongly to sonar, killer whale, and bandlimited noise by ceasing echolocation and...a more complete picture of pilot whale baseline behavior and vocalization rates in different social contexts, as well as calculating more exact...follows and attempts at tagging these animals, no tags were successfully deployed. In 2011, playbacks of both mammal-eating killer whale calls and
Petkovic, M.; Jonker, Willem
An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level
Lendvai, Ádám Z; Akçay, Çağlar; Weiss, Talia; Haussmann, Mark F; Moore, Ignacio T; Bonier, Frances
Playbacks of visual or audio stimuli to wild animals are a widely used experimental tool in behavioral ecology. In many cases, however, playback experiments are constrained by observer limitations, such as the time observers can be present or the accuracy of observation. These problems are particularly apparent when playbacks are triggered by specific events, such as the performance of a specific behavior, or are targeted at specific individuals. We developed a low-cost automated playback/recording system using two field-deployable devices: radio-frequency identification (RFID) readers and Raspberry Pi micro-computers. This system detects a specific passive integrated transponder (PIT) tag attached to an individual and subsequently plays back the stimuli or records audio or visual information. To demonstrate the utility of this system and to test one of its possible applications, we tagged female and male tree swallows (Tachycineta bicolor) from two box-nesting populations with PIT tags and carried out playbacks of nestling begging calls every time focal females entered the nestbox over a six-hour period. We show that the RFID-Raspberry Pi system presents a versatile, low-cost, field-deployable system that can be adapted for many audio and visual playback purposes. In addition, the set-up does not require programming knowledge and is easily customized to many other applications, depending on the research questions. Here, we discuss the possible applications and limitations of the system. The low cost and the small learning curve of the RFID-Raspberry Pi system provide a powerful new tool for field biologists.
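The trigger logic of such a system, stripped of the RFID hardware I/O, can be sketched as below. The tag ID, the audio file name, and the use of ALSA's `aplay` for playback on the Raspberry Pi are illustrative assumptions, not details from the paper:

```python
import subprocess

# PIT tags of focal females (hypothetical tag ID for illustration).
FOCAL_TAGS = {"041A6F3B2C"}

def should_play(tag_id, focal_tags=FOCAL_TAGS):
    """Return True when a detected PIT tag belongs to a focal individual."""
    return tag_id in focal_tags

def on_tag_detected(tag_id, wav="begging_calls.wav"):
    """Called by the RFID reader loop each time a tag is read at the nestbox."""
    if should_play(tag_id):
        # Play the stimulus via the Pi's audio output; aplay ships with ALSA.
        subprocess.Popen(["aplay", wav])
```

In a deployed version, `on_tag_detected` would be fed by whatever serial or GPIO interface the RFID reader exposes, and the same hook could start an audio or video recording instead of a playback.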
4K video is a new format. At 3840 × 2160 resolution, it has 4 times the resolution of standard 1080 high-definition (HD) video, so magnification can be done without loss of resolution. This study uses 4K video for video-stroboscopy. Forty-six patients were examined by conventional video-stroboscopy (digital 3-chip CCD) and compared with 4K video-stroboscopy. The video was recorded on a Blackmagic 4K cinema production camera in CinemaDNG RAW format, played back on a 4K monitor, and compared to standard video. Pathological conditions included polyps, scar, cysts, cancer, sulcus, and nodules. Successful 4K video recordings were achieved in all subjects using a 70° rigid endoscope. The camera system is bulky, but the examination is performed similarly to standard video-stroboscopy. Playback requires a 4K monitor. As expected, the images were far clearer in detail than standard video, and stroboscopy video using the 4K camera was consistently able to show more detail. Two patients had their diagnosis changed after 4K viewing. 4K video is an exciting new technology that can be applied to laryngoscopy. It allows for cinematic 4K-quality recordings, and both continuous and stroboscopic light can be used for visualization. Its clinical use is feasible, but its usefulness must still be proven. © The Author(s) 2015.
Kearns, G.D.; Kwartin, N.B.; Brinker, D.F.; Haramis, G.M.
We used playback of rail vocalizations and improved trap design to enhance capture of fall migrant Soras (Porzana carolina) and Virginia Rails (Rallus limicola) in marshes bordering the tidal Patuxent River, Maryland. Custom-fabricated microchip message repeating sound systems provided digitally recorded sound for long-life, high-quality playback. A single sound system accompanied each 30-45 m long drift fence trap line fitted with 1-3 cloverleaf traps. Ramped funnel entrances improved retention of captured rails and deterred raccoon (Procyon lotor) predation. Use of playback and improved trap design increased trap success by over an order of magnitude and resulted in capture and banding of 2315 Soras and 276 Virginia Rails during September and October 1993-1997. The Sora captures more than doubled the banding records for the species in North America. This capture success demonstrates the efficacy of banding large numbers of Soras and Virginia Rails on migration and winter concentration areas.
Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean
Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions of learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and improved attitudes toward the subject following interactive video.…
Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas
Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design…
Video streaming is nowadays the Internet's biggest source of consumer traffic. Traditional content providers rely on a centralised client-server model for distributing their video streaming content. The current generation is moving from being passive viewers, or content consumers, to active content
Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo
This paper describes the first stages of a research project currently under development in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, which are steps prior to the content-extraction task, and we discuss them in order to select the most suitable ones. We then outline a block design of a temporal segmentation module and present guidelines for the design of the semantic segmentation one. All these operations tend to facilitate the automated extraction of the low-level and semantic features that will finally form part of the video descriptors.
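One classic temporal-segmentation technique such a review would cover is cut detection by frame-to-frame histogram difference. A minimal sketch of the idea (not necessarily the method the authors selected), with the threshold as a tunable assumption:

```python
def hist_diff(h1, h2):
    """Sum of absolute bin differences between two grey-level histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(histograms, threshold):
    """Return the indices of frames where a new shot is declared to start.

    histograms is one grey-level histogram per frame; a cut is declared
    whenever the difference to the previous frame exceeds the threshold.
    """
    return [i for i in range(1, len(histograms))
            if hist_diff(histograms[i - 1], histograms[i]) > threshold]
```

For example, a sequence of two dark frames followed by two bright frames yields a single detected cut at the transition; real systems refine this with adaptive thresholds to separate cuts from gradual dissolves.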
Zhao, Heng; Wang, Xiang-jun
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information-interaction control unit between the FPGA and the PC, allowing the system to encode and decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data-stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing-generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Experiments show that the system achieves high-quality video conversion with a minimal board size.
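The colour-space conversion stage in such pipelines is commonly an RGB-to-YCbCr matrix per ITU-R BT.601. A software sketch of the arithmetic follows; this is an assumed illustration of the standard full-range conversion, not the paper's actual fixed-point FPGA implementation:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to full-range YCbCr (BT.601 weights)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)
```

White maps to maximum luma with neutral chroma, and black to zero luma with neutral chroma; a hardware version would implement the same matrix with scaled integer multipliers.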
Full Text Available In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution is designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. Tests show that the system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.
Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe
This contribution focuses on the different topics covered by the special issue titled 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bio-inspired vision systems, video processing, image formation and physics-based vision, 3D processing/coding, scene understanding, and multimedia.
Käsbach, Johannes; Favrot, Sylvain Emmanuel; Buchholz, Jörg
Planar (2D) and periphonic (3D) higher-order Ambisonics (HOA) playback systems are widely used in multi-channel audio applications. For a given Ambisonics order, 2D systems require far fewer loudspeakers and provide a larger spatial resolution, but cannot naturally reproduce elevated sound sources. In order to combine the benefits of 2D and 3D systems, a higher-order 2D playback system can be mixed with a lower-order 3D system. In the present study, a mixed-order Ambisonics playback system was realised by extending the spherical-harmonics decomposition of a 3D sound field with additional horizontal components. The performance of the system was analysed by considering a small and a large loudspeaker setup, allowing for different combinations of 2D and 3D Ambisonics orders. An objective evaluation showed that the systems provided a high spatial resolution for horizontal sources, which can be significantly increased by adding 2D components, thereby approaching the 2D system's performance. Simultaneously, the frequency-spectrum properties of horizontal sound sources were restored and did not show the low-pass filtering effect present in 3D HOA systems.
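The horizontal components added to the 3D decomposition are circular harmonics of the source azimuth. A minimal sketch of 2D Ambisonics encoding of a plane wave is given below; the unnormalised cos/sin weighting is an assumption for illustration, since Ambisonics normalisation conventions (N2D, SN2D, etc.) vary:

```python
import math

def encode_2d(phi, order):
    """Circular-harmonic (2D Ambisonics) signals for a unit-amplitude
    plane wave arriving from azimuth phi (radians).

    Returns [W, cos(phi), sin(phi), cos(2*phi), sin(2*phi), ...],
    i.e. 1 + 2*order channels.
    """
    sig = [1.0]                                   # W (order 0) component
    for m in range(1, order + 1):
        sig += [math.cos(m * phi), math.sin(m * phi)]
    return sig
```

A mixed-order system in the spirit of the abstract would carry these 2D terms up to a high order while truncating the full spherical-harmonic (3D) set at a lower order.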
van der Schaar-Mitrea, Mihaela; de With, Peter H. N.
The diversity of TV images has grown with the increased use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block-predictive coding technique featuring individual pixel access, enabling a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit-rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
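The lossless graphics path combines run-length and arithmetic coding. A minimal run-length encoder over pixel values illustrates the first stage (the arithmetic-coding stage, which would further compress the run symbols, is omitted here):

```python
def rle_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs.

    Flat graphics regions with long runs of one colour compress well;
    the run symbols would then feed an arithmetic coder.
    """
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]
```

On synthetic graphics such as menus and captions, where scanlines contain long constant-colour runs, this stage alone already removes most of the redundancy.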
Full Text Available Most universities already implement wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore important to investigate how broadcasting instructional video from a server through a wireless access point performs in a university setting. Wired networks require cables that connect and transmit data from one computer to another, whereas wireless networks connect computers through radio waves. This research tests and assesses how a WLAN access point performs when broadcasting instructional video from a server to clients. The study aims to show how to build a wireless network using an access point, how to set up a server with supporting video software, and how to transmit video from the server to clients via the access point.
Mohamed M. Fouad
Full Text Available In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme designed from the perspective of viewer interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system that exploits the modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real test sequences, clear improvements are shown for the proposed interactive multiview video system over competing ones in terms of average transmission bit-rate and storage size of the decoded (i.e., transferred) data, with comparable rate-distortion.
... shall be “Open Video System Notice of Intent” and “Attention: Media Bureau.” This wording shall be... Notice of Intent with the Office of the Secretary and the Bureau Chief, Media Bureau. The Notice of... capacity through a fair, open and non-discriminatory process; the process must be insulated from any bias...
Full Text Available A novel video conference system is developed. Suppose that three people A, B, and C attend the video conference; the proposed system enables eye contact among every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangle video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. The cameras are set behind the half mirror. Since the participant's image (face) and the camera position are aligned in the same direction, eye contact is maintained and conversation becomes very natural compared with conventional video conference systems, in which participants' eyes do not point toward the other participant. When the three participants sit at the vertices of an equilateral triangle, eye contact can be maintained even in the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be maintained not only for two or three participants but for any number of participants, as long as they sit at the vertices of a regular polygon.
Full Text Available This paper reports on the development of an automated embedded video surveillance system using two customized embedded RISC processors. The application is partitioned into object tracking and video stream encoding subsystems. The real-time object tracker detects and tracks moving objects in video images of scenes taken by stationary cameras, and is based on the block-matching algorithm. The video stream encoding involves the optimization of an International Telecommunication Union (ITU-T) H.263 baseline video encoder for quarter common intermediate format (QCIF) and common intermediate format (CIF) resolution images. The two subsystems running on the two processor cores were integrated, and a simple protocol was added to realize the automated video surveillance system. The experimental results show that the system is capable of detecting, tracking, and encoding QCIF and CIF resolution images with object movements in them in real time. With low cycle-count, low transistor-count, and low power-consumption requirements, the system is ideal for deployment in remote locations.
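The tracker above is based on the block-matching algorithm. A minimal exhaustive-search version over plain nested lists (the SAD cost and all names are our illustrative choices, not the paper's implementation) might look like:

```python
def sad(frame_a, frame_b, ax, ay, bx, by, bs):
    """Sum of absolute differences between two bs x bs blocks."""
    total = 0
    for dy in range(bs):
        for dx in range(bs):
            total += abs(frame_a[ay + dy][ax + dx] - frame_b[by + dy][bx + dx])
    return total

def block_match(prev, cur, bx, by, bs, search):
    """Exhaustive-search block matching: find the motion vector (mvx, mvy)
    minimising SAD between the block at (bx, by) in `cur` and candidate
    blocks in `prev` within +/- `search` pixels."""
    h, w = len(prev), len(prev[0])
    best, best_cost = (0, 0), float("inf")
    for mvy in range(-search, search + 1):
        for mvx in range(-search, search + 1):
            px, py = bx + mvx, by + mvy
            if 0 <= px <= w - bs and 0 <= py <= h - bs:
                cost = sad(prev, cur, px, py, bx, by, bs)
                if cost < best_cost:
                    best_cost, best = cost, (mvx, mvy)
    return best
```

A real-time implementation would replace the exhaustive scan with a fast search pattern (e.g. three-step or diamond search), but the cost function is the same.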
Giroire, Frédéric; Huin, Nicolas
We study distributed systems for live video streaming. These systems can be of two types: structured and unstructured. In an unstructured system, the diffusion is done opportunistically. The advantage is that it smoothly handles churn, that is, the arrival and departure of users, which is very high in live streaming systems. By contrast, in a structured system, the diffusion of the video is done using explicit diffusion trees. The advantage is that the dif...
Al-Hamad, A.; Moussa, A.; El-Sheimy, N.
The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources of geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared with the results obtained from separate captured images.
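The paper's automatic selection of highly overlapping frames is not specified in detail in the abstract; one simple stand-in policy (ours, not the authors') keeps a new frame whenever the camera has moved by more than a threshold since the last kept frame:

```python
def extract_keyframes(positions, max_gap):
    """Keep frame 0, then keep each frame whose (1-D, for simplicity)
    camera displacement from the last kept frame reaches max_gap,
    a crude proxy for a minimum-overlap requirement."""
    kept = [0]
    for i in range(1, len(positions)):
        if abs(positions[i] - positions[kept[-1]]) >= max_gap:
            kept.append(i)
    return kept
```

In practice the displacement would come from the smartphone's navigation sensors or from feature matching between frames rather than a scalar position.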
Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focuses on emotion recognition from the face and on hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...
Su, Ang; Zhang, Yueqiang; Dong, Jing; Xu, Yuhua; Zhu, Xianwei; Zhang, Xiaohu
The high portability of small Unmanned Aerial Vehicles (UAVs) gives them an important role in surveillance and reconnaissance tasks, and military and civilian demand for UAVs is constantly growing. Recently, we developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that it performs well.
Kapustin, A. A.; Razumovskii, V. N.; Iatsevich, G. B.
A spatial-spectral analysis method is considered for a laser scanning video system with phase processing of the received signal at a modulation frequency. Distortions caused by the system are analyzed, and the general problem is reduced to the case of a cylindrical surface. The suggested approach can also be used for scanning microwave systems.
... system operator may charge different rates to different classes of video programming providers, provided... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76...
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
We propose a practical system that can effectively mix the depth data of real and virtual objects using a Z-buffer and can quickly generate digital mixed-reality video holograms using multiple graphics processing units (GPUs). In an experiment, we verify that real and virtual objects can be merged naturally at free viewing angles and that the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed-reality video holograms at 7.6 frames per second. Finally, the system performance is verified objectively and through users' subjective evaluations.
Ichikawa, Kotaro; Akamatsu, Tomonari; Shinke, Tomio; Sasamori, Kotoe; Miyauchi, Yukio; Abe, Yuki; Adulyanukosol, Kanjana; Arai, Nobuaki
Dugongs (Dugong dugon) were monitored using simultaneous passive acoustic methods and visual observations in Thai waters during January 2008. Chirp and trill calls were detected by a towed stereo hydrophone array system. Two teams of experienced observers conducted standard visual observations on the same boat. Detection probabilities of acoustic and visual monitoring were compared between the two independent observer teams. Acoustic and visual detection probabilities were 15.1% and 15.7%, respectively, employing a 300 s matching time interval. When conspecific chirp calls were broadcast from an underwater speaker deployed on the side of the observation boat, the detection probability of acoustic monitoring rose to 19.2%, while the visual detection probability was 12.5%. Vocal hot spots characterized by frequent acoustic detection of calls were suggested by dispersion analysis, while dugongs were visually observed constantly throughout the focal area (p ...). Acoustic monitoring assisted the survey, since detection performance similar to that of experienced visual observers was shown. Playback of conspecific chirps appeared to increase the detection probability, which could be beneficial for future field surveys using passive acoustics to confirm the presence of dugongs in a focal area.
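The detection probabilities above rely on pairing acoustic and visual detections within a 300 s matching interval. A sketch of one way to compute such a matched fraction (our simplification of the paper's analysis; timestamps in seconds):

```python
def matched_fraction(acoustic_times, visual_times, window=300):
    """Fraction of acoustic detections that have at least one visual
    detection within +/- `window` seconds."""
    if not acoustic_times:
        return 0.0
    matched = sum(
        1 for t in acoustic_times
        if any(abs(t - v) <= window for v in visual_times)
    )
    return matched / len(acoustic_times)
```

Swapping the two argument lists gives the visual-to-acoustic probability, so both directions of the comparison come from the same routine.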
Chen, Chien-Hsu; Chou, Yin-Ju
This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker to attach to a picture postcard, and allows them to record real-time images and video to be augmented onto that marker. Through these dynamic images, users can share travel moods, greetings, and travel experiences with their friends. Without changing the traditional picture postcard, we develop an augmented video system on it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience by preserving memories and emotional expression through augmented digital media.
Full Text Available Future wireless video transmission systems will consider orthogonal frequency division multiplexing (OFDM) as the basic modulation technique due to its robustness and low-complexity implementation in the presence of frequency-selective channels. Recently, adaptive bit loading techniques have been applied to OFDM, showing good performance gains in cable transmission systems. In this paper a multilayer bit loading technique, based on the so-called "ordered subcarrier selection algorithm," is proposed and applied to a Hiperlan2-like wireless system at 5 GHz for efficient layered multimedia transmission. Different schemes realizing unequal error protection at both the coding and modulation levels are compared. The strong impact of this technique on video quality is evaluated for MPEG-4 video transmission.
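The ordered-subcarrier-selection idea pairs the most important data with the best subcarriers. A toy version (the greedy mapping and all names are our assumptions about the general approach, not the paper's exact algorithm):

```python
def ordered_subcarrier_loading(snrs, layers):
    """Sort subcarriers by SNR (best first) and assign video layers,
    listed in decreasing importance, to the best subcarriers, so the
    base layer receives the strongest unequal error protection."""
    order = sorted(range(len(snrs)), key=lambda i: snrs[i], reverse=True)
    return {idx: layer for idx, layer in zip(order, layers)}
```

In a full loader each subcarrier would also be assigned a modulation order and coding rate per layer; the sketch only shows the ordering step that gives the technique its name.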
Rothkrantz, L.; Lefter, I.
The paper describes a surveillance system of cameras installed on lampposts in a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks, and is surrounded by gates and water. The video recordings are
Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.
Yang, Fan; Ma, Chunting; Li, Haoyi
This paper presents the design of a wireless video transmission system based on STM32. The system uses the STM32F103VET6 microprocessor as its core; video data are collected through a video acquisition module and sent to the receiver through a wireless transmitting module, and the received data are displayed on an LCD screen. The software design of the receiver and transmitter is introduced. Experiments prove that the system realizes the wireless video transmission function.
Jones, D. P.; Shirey, D. L.; Amai, W. A.
This paper presents a high-bandwidth fiber-optic communication system intended for post-accident recovery of weapons. The system provides bi-directional, multichannel, multi-media communications. Two smaller systems that were developed as direct spin-offs of the larger system are also briefly discussed.
... COMMISSION In the Matter of Certain Video Analytics Software, Systems, Components Thereof, and Products... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... Trade Commission has received a complaint entitled Certain Video Analytics Software, Systems, Components... analytics software, systems, components thereof, and products containing same. The complaint names as...
Favrot, Sylvain Emmanuel; Marschall, Marton; Käsbach, Johannes
Planar (2D) and periphonic (3D) higher-order Ambisonics (HOA) systems are widely used to reproduce the spatial properties of acoustic scenarios. Mixed-order Ambisonics (MOA) systems combine the benefit of higher-order 2D systems, i.e. a high spatial resolution over a larger usable frequency bandwidth... in the horizontal plane and the usable frequency bandwidth for playback as well as recording. Hence the described MOA scheme provides a promising method for improving the performance of current 3D sound reproduction systems.
Gramss, Denise; Struve, Doreen
The study reported in this paper investigated the usefulness of different instructions for guiding inexperienced older adults through interactive systems. It was designed to compare different media with respect to their social as well as motivational impact on the elderly during the learning process. Specifically, video was compared with…
Glazkov, V. D.; Goretov, Iu. M.; Rozhavskii, E. I.; Shcherbakov, V. V.
The self-correcting video section of the satellite-borne Fragment multispectral scanning system is described. Its design makes possible sufficiently efficient equalization of the transformation coefficients of all the measuring sections, given a reference-radiation source and a single reference time interval for all the sections.
This paper describes The Freedom Theatre's Freedom Bus initiative and its use of Playback Theatre for community mobilisation and cultural activism within Occupied Palestine. Utilising a conflict transformation perspective, conventional dialogue-oriented initiatives are contrasted against interventions that pursue conscientisation and alliance…
Kano, Fumihiro; Hirata, Satoshi; Deschner, Tobias; Behringer, Verena; Call, Josep
Emotion is one of the central topics in animal studies and is likely to attract substantial attention in the coming years. Recent studies have developed a thermo-imaging technique to measure facial skin temperature in studies of emotion in humans and macaques. Here we established the procedures and techniques needed to apply the same technique to great apes. We conducted two experiments in two established research facilities in Germany and Japan. A total of twelve chimpanzees were tested in three conditions, in which they were presented with playback sounds (Exp. 1) or videos (Exp. 2) of fighting conspecifics, control sounds/videos (allospecific display calls in Exp. 1; resting conspecifics in Exp. 2), or no sound/image. Behavioral, hormonal (salivary cortisol) and heart-rate responses were recorded simultaneously. The nasal temperature of the chimpanzees dropped linearly by up to 1.5 °C within 2 min, and recovered to baseline in 2 min, in the experimental but not the control conditions. We found related changes in excitement behavior and heart-rate variability, but not in salivary cortisol, indicating that the overall responses involved activity of the sympathetic nervous system but not measurable activity of the hypothalamic-pituitary-adrenal (HPA) axis. The influence of general activity (walking, eating) was not negligible but was controllable in the experiments, and we propose several techniques to control such confounding factors. Overall, thermo-imaging is a promising technique that should be added to the traditional physiological and behavioral measures in primatology and comparative psychology. Copyright © 2015 Elsevier Inc. All rights reserved.
Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.
Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to the selected coefficients to generate a robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
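The embedding step hides bits in energy relationships between groups of DFT coefficients. A bare-bones sketch of that idea (the scaling rule, the margin, and the names are our illustrative choices, not the paper's scheme):

```python
def embed_bit(group_a, group_b, bit, margin=1.0):
    """Hide one watermark bit in the energy relationship of two
    coefficient groups: bit 1 -> energy(A) > energy(B), bit 0 -> the
    reverse, enforced with a safety margin. Returns adjusted copies."""
    ea = sum(x * x for x in group_a)
    eb = sum(x * x for x in group_b)
    a, b = list(group_a), list(group_b)
    if bit == 1 and ea <= eb + margin:
        scale = ((eb + margin) / ea) ** 0.5 * 1.01   # boost A's energy
        a = [x * scale for x in a]
    elif bit == 0 and eb <= ea + margin:
        scale = ((ea + margin) / eb) ** 0.5 * 1.01   # boost B's energy
        b = [x * scale for x in b]
    return a, b

def extract_bit(group_a, group_b):
    """Recover the bit by comparing the two group energies."""
    ea = sum(x * x for x in group_a)
    eb = sum(x * x for x in group_b)
    return 1 if ea > eb else 0
```

Because only a relationship (not an absolute value) carries the bit, mild distortions such as lossy compression tend to preserve it, which is what makes the watermark semifragile.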
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy compared with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.
Kong, Hyoun-Joong; Seo, Jong Mo; Hwang, Jeong Min; Kim, Hee Chan
Binocular indirect ophthalmoscopy (BIO) provides a wider view of the fundus with stereopsis, in contrast to direct ophthalmoscopy. The proposed system is composed of a portable BIO and a 3D viewing unit. The illumination unit of the BIO uses a high-flux LED as a light source, an LED condensing lens cap for beam focusing, color filters, and a small lithium-ion battery. In the optics unit, a beam splitter distributes the examinee's fundus image both to the examiner's eye and to a CMOS camera module attached to the device. Captured retinal video streams from the stereo camera modules were sent to a PC over USB 2.0. For 3D viewing, the two video streams, which have parallax between them, were aligned vertically and horizontally and combined into a side-by-side video stream for cross-eyed stereoscopy. The data were also converted into an autostereoscopic video stream using vertical interlacing for a stereoscopic LCD with a glass 3D filter attached to its front side. Our newly devised system presented a real-time 3D view of the fundus to assistants with less dizziness than cross-eyed stereoscopy, and the BIO showed good performance compared with a conventional portable BIO (Spectra Plus, Keeler Limited, Windsor, UK).
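Vertical interlacing for the autostereoscopic LCD alternates pixel columns from the two views. A minimal sketch (representing frames as nested lists of pixel values is our simplification):

```python
def interlace_columns(left, right):
    """Build a column-interleaved stereo frame: even pixel columns come
    from the left view, odd columns from the right view."""
    return [
        [l if x % 2 == 0 else r for x, (l, r) in enumerate(zip(lrow, rrow))]
        for lrow, rrow in zip(left, right)
    ]
```

A parallax-barrier or lenticular panel then steers the even columns to one eye and the odd columns to the other, so no glasses are needed.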
Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.
Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... the United States after importation of certain video analytics software systems, components thereof...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Investigations: Terminations, Modifications and Rulings: Certain Video Game Systems and... United States after importation of certain video game systems and controllers by reason of infringement...
Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem
The paper presents the IVAS system developed within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander and respond to commands via text or multimedia messages taken by their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang
Wireless power transmission (WPT) technology can solve the energy shortage problem of video capsule endoscopes (VCEs) powered by button batteries, but fixed platforms have limited its clinical application. This paper presents a portable WPT system for a VCE. Besides portability, power transfer efficiency and stability are the main objectives in the optimization of the system design, which covers the transmitting coil structure, portable control box, operating frequency, magnetic core and winding of the receiving coil. Following these principles, the relevant parameters are measured, compared and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of a small pig, the VCE was supplied with sufficient energy by the WPT system, and the energy conversion efficiency was 2.8%. The video obtained was clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.
... COMMISSION In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation AGENCY: U.S... importation, and the sale within the United States after importation of certain video game systems and... after importation of certain video game systems and controllers that infringe one or more of claims 16...
Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément
This article presents a system and a protocol to characterize image stabilization systems for both still images and videos. It uses a six-axis platform, three axes for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos: the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measure of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes the apparent global motion well, capturing translations as well as rotations about the optical axis and distortion due to the electronic rolling shutter used in most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs and smartphones.
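The full system fits a homography to the four detected markers; as a minimal illustration of the same measurement idea restricted to the translational component only (our simplification, with hypothetical names), one can average the marker displacements:

```python
def estimate_translation(ref_markers, cur_markers):
    """Estimate the global frame translation as the mean displacement of
    corresponding chart markers (reference -> current), in pixels."""
    n = len(ref_markers)
    dx = sum(c[0] - r[0] for r, c in zip(ref_markers, cur_markers)) / n
    dy = sum(c[1] - r[1] for r, c in zip(ref_markers, cur_markers)) / n
    return dx, dy
```

With four correspondences the full 8-parameter homography can be solved instead, which additionally captures the rotation and rolling-shutter shear the abstract mentions.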
Sun, Jun; Liang, Mingxing; Chen, Weijun; Zhang, Bin
In order to reinforce the safety measures for vegetable sheds, the S3C44B0X is used as the main processor chip. The embedded hardware platform is built with a few peripheral chips, the network server is set up in an embedded Linux environment, and MPEG-4 compression and real-time transmission are carried out. Experiments indicate that the video monitoring system achieves good results and can be applied to the safety of vegetable sheds.
Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício
Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce "content pollution" into the system, thus causing loss of service effectiveness and credibility as w...
Hanjalic, Alan; Ceccarelli, Marco; Lagendijk, Reginald L.; Biemond, Jan
In the European project SMASH, mass-market storage systems for domestic use are under study. Besides the storage technology developed in this project, the related objective of user-friendly browsing/querying of video data is studied as well. Key issues in developing a user-friendly system are (1) minimizing user intervention in the preparatory steps (extraction and storage of the representative information needed for browsing/querying), (2) providing an acceptable representation of the stored video content at a higher automation level, (3) the possibility of performing these steps directly on the incoming stream at storage time, and (4) parameter-robustness of the algorithms used for these steps. This paper proposes and validates novel approaches for automating the aforementioned preparatory phases. A detection method for abrupt shot changes is proposed, using a locally computed threshold based on a statistical model of frame-to-frame differences. For the extraction of representative frames (key frames), an approach is presented that distributes a given number of key frames over the sequence depending on content changes within a temporal segment of the sequence. A multimedia database is introduced that automatically stores all bibliographic information about a recorded video as well as a visual representation of its content, without any manual intervention by the user.
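The abrupt-cut detector uses a locally computed threshold on frame-to-frame differences. A compact sketch of that general scheme (the window size, the k·σ rule, and the extra 2×-mean guard are our assumptions, not the paper's exact statistical model):

```python
import statistics

def detect_cuts(diffs, win=4, k=3.0):
    """Flag frame i as an abrupt shot change when its frame-to-frame
    difference exceeds mean + k * std of the other differences in a
    local window, with a guard against near-constant neighbourhoods."""
    cuts = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - win), min(len(diffs), i + win + 1)
        neigh = [diffs[j] for j in range(lo, hi) if j != i]
        mu = statistics.mean(neigh)
        sd = statistics.pstdev(neigh)
        if d > mu + k * sd and d > 2 * mu:
            cuts.append(i)
    return cuts
```

A local (rather than global) threshold is what lets the detector run directly on the incoming stream at storage time, as the abstract requires.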
In order to compress large amounts of video data effectively and transfer them smoothly over limited network bandwidth, this article uses MPEG-4 compression technology to compress the video stream. For network transmission, the transmission technology is fully analyzed and optimized according to the characteristics of the video stream and, taking into account the current network bandwidth and protocol, a network model is established for transferring and playing back video streams effectively. Through the combination of these two areas, the compression and storage of video files and the efficiency of network transmission are significantly improved, increasing video processing power.
Li, Yucheng; Han, Dantao; Yan, Juanli
A wireless video surveillance system based on ARM was designed and implemented. The latest ARM11 S3C6410 was used as the main monitoring terminal chip, running an embedded Linux operating system. The video input is obtained from an analog CCD and converted from analog to digital by the TVP5150 video chip. After being compressed by the H.264 encoder in the S3C6410, the video is packed with RTP and transmitted over the wireless USB adapter TL-WN322G+. Furthermore, the video images are preprocessed: the system can detect abnormalities in the specified scene and raise alarms. The video transmission definition is standard definition 480p, and the video stream can be monitored in real time. The system has been used for real-time intelligent video surveillance of specified scenes.
Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu
In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers running in web browsers, allowing the user to look around omni-directional video contents in the browser. The omni-directional video viewer is implemented as an ActiveX program, so the viewer is installed automatically when the user opens a web site containing omni-directional video contents. The system allows many users at different sites to look around the scene, much like interactive TV, using a multicast protocol without increasing network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capture, and we can look around high-resolution, high-quality video contents. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video of the surroundings while the car runs in an outdoor environment. The acquired video streams are transferred to the remote site through wireless and wired networks using the multicast protocol, and we can view the live video freely in an arbitrary direction. In both experiments, we implemented view-dependent presentation with a head-mounted display (HMD) and a gyro sensor for a richer sense of presence.
Bower, Matt; Cavanagh, Michael; Moloney, Robyn; Dao, MingMing
This paper reports on how the cognitive, behavioural and affective communication competencies of undergraduate students were developed using an online Video Reflection system. Pre-service teachers were provided with communication scenarios and asked to record short videos of one another making presentations. Students then uploaded their videos to…
BDM Corp., McLean, VA: Video Automatic Target Tracking System (VATTS) Operating Procedures (August 1980). The scanned text is largely unrecoverable; legible fragments list system hardware (magnetic tape transports, a Tektronix I/O terminal, removable and fixed disk storage units, a cathode ray tube display) and operator programs for trial information review and magnetic tape duplication.
Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise
Measuring the location of the shoreline and monitoring foreshore changes through time are fundamental tasks for correct coastal management at many sites around the world. Several authors have demonstrated video systems to be an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons and years. In the past few years, owing to the wide availability of video cameras at relatively low prices, the use of video cameras and video image analysis for environmental monitoring has increased significantly. Although video monitoring systems were often used in the research field, they are most often applied for practical purposes, including: i) identification and quantification of shoreline erosion; ii) assessment of coastal protection structure and/or beach nourishment performance; iii) basic input to engineering design in the coastal zone; and iv) support for integrated numerical model validation. Here we present the guidelines for the creation of a new video monitoring network near Jesolo beach (NW Adriatic Sea, Italy). Within this 10 km-long tourist district several engineering structures have been built in recent years with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes, detached breakwaters. The area investigated experienced severe coastal erosion in the past decades, including a major event in November 2012. The activity is planned within the framework of the RITMARE project, which also includes other monitoring and scientific activities (bathymetry surveys, wave and current measurements, hydrodynamic and morphodynamic modelling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the…
We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD) system in heterogeneous environments, where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or through video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy as well as the suitable delivery mechanism for a specific quality level of a video such that the overall system blocking probability is minimized. In order to find a near-optimal solution for such a complex video assignment problem, an evolutionary approach based on a genetic algorithm (GA) is proposed. The results show that system performance can be significantly enhanced by efficiently coupling the various techniques.
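As a rough illustration of the GA-based assignment idea in the abstract above, the sketch below evolves an assignment of video quality levels to one of three hypothetical delivery options under a made-up blocking-probability surrogate. The cost model, delivery options, and GA parameters are illustrative assumptions, not the paper's formulation.

```python
import random

def blocking(assignment, demand):
    # Toy blocking-probability surrogate (assumed, not the paper's model):
    # 0 = proxy, 1 = broadcast, 2 = unicast, each with a fixed unit cost.
    cost = {0: 1.0, 1: 0.5, 2: 2.0}
    return sum(cost[a] * d for a, d in zip(assignment, demand))

def ga_assign(demand, pop_size=30, generations=100, seed=1):
    """Elitist GA: keep the better half, refill with crossover + mutation."""
    rng = random.Random(seed)
    n = len(demand)
    pop = [[rng.randrange(3) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: blocking(ind, demand))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # point mutation
                child[rng.randrange(n)] = rng.randrange(3)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: blocking(ind, demand))

demand = [0.9, 0.7, 0.5, 0.3, 0.1]   # assumed per-quality-level demand
best = ga_assign(demand)
```

The chromosome is simply the per-quality-level delivery choice; a real system would replace `blocking` with the queueing-derived blocking probability the paper minimizes.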
White, Preston, III
Kennedy Space Center has the need for economical transmission of two multiplexed video signals along multimode fiber-optic systems. These systems must span unusual distances and must meet RS-250B short-haul standards after reception. Bandwidth is a major problem, and studies of the installed fibers, available LEDs and PINFETs led to the choice of 100 MHz as the upper limit for the system bandwidth. Optical multiplexing and digital transmission were deemed inappropriate. Three electrical multiplexing schemes were chosen for further study. Each of the multiplexing schemes included an FM stage to help meet the stringent S/N specification. Both FM and AM frequency-division multiplexing methods were investigated theoretically, and these results were validated with laboratory tests. The novel application of quadrature amplitude multiplexing was also considered. Frequency-division multiplexing of two wideband FM video signals appears the most promising scheme, although this application requires high-power, highly linear LED transmitters. Further studies are necessary to determine whether LEDs of appropriate quality exist and to better quantify the performance of QAM in this application.
Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion-parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
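The weighted averaging of aligned radiance maps mentioned above can be sketched as follows. The triangle weighting function and exposure handling are illustrative assumptions, not the authors' exact choices, and the frames are assumed to be grayscale and already aligned.

```python
def weight(z):
    """Triangle weight: trust mid-range pixels, distrust values near 0 or 255."""
    return min(z, 255 - z) / 127.5

def fuse_radiance(frames, exposures):
    """frames: list of grayscale images (lists of rows of 0-255 ints),
    assumed already aligned; exposures: per-frame exposure times.
    Returns a per-pixel weighted average of radiance estimates z / t."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for img, t in zip(frames, exposures):
                z = img[y][x]
                wgt = weight(z)
                num += wgt * (z / t)   # radiance estimate from this frame
                den += wgt
            # Fall back to the first frame if every sample is saturated.
            out[y][x] = num / den if den else frames[0][y][x] / exposures[0]
    return out
```

A saturated pixel (255) receives zero weight, so its radiance is recovered entirely from the shorter-exposure frame, which is the core of the fusion idea.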
Wijnants, Maarten; Van Erum, Kris; QUAX, Peter; Lamotte, Wim
Video consumption has since the emergence of the medium largely been a passive affair. This paper proposes augmented Omni-Directional Video (ODV) as a novel format to engage viewers and to open up new ways of interacting with video content. Augmented ODV blends two important contemporary technologies: Augmented Video Viewing and 360 degree video. The former allows for the addition of interactive features to Web-based video playback, while the latter unlocks spatial video navigation opportunit...
Sandy, C. L. M.; Meiyanti, R.
Height measurement compares the magnitude of an object against a standard measuring tool. The problem with existing measurements is that they still rely on simple apparatus, such as a tape measure, and this method requires a relatively long time. To overcome these problems, this research aims to create software that uses image processing for height measurement. The captured image is then tested: the object captured by the video camera can be identified, so that its height can be measured using Otsu's thresholding method. The system was built using Delphi 7 with the Vision Lab VCL 4.5 component. To increase the quality of the system in future research, the developed system can be combined with other methods.
Giaccone, Agnese; Solli, Piergiorgio; Bertolaccini, Luca
The magnetic anchoring guidance system (MAGS) is one of the most promising technological innovations in minimally invasive surgery and consists of two magnetic elements matched through the abdominal or thoracic wall. The internal magnet can be inserted into the abdominal or chest cavity through a small single incision and then moved into position by manipulating the external component. In addition to a video camera system, the inner magnetic platform can house remotely controlled surgical tools, thus reducing instrument fencing, a serious inconvenience of uniportal access. The latest prototypes are equipped with self-light-emitting diode (LED) illumination and a wireless antenna for signal transmission and device control, which allows bypassing the obstacle of wires crossing the field of view (FOV). Despite being originally designed for laparoscopic surgery, the MAGS seems to suit optimally the characteristics of the chest wall and might meet the specific demands of video-assisted thoracic surgery (VATS) in terms of ergonomics, visualization and surgical performance; moreover, it involves fewer risks for the patients and an improved aesthetic outcome.
Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh
In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…
Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji
Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.
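The editing step described in the abstract above, moving the right-eye view to the left half of the frame and vice versa, amounts to swapping the two halves of each frame. A minimal sketch, modeling a frame as rows of pixel values (and assuming an even frame width):

```python
def swap_stereo_halves(frame):
    """Swap the left and right halves of each row so the recorded
    right-eye view ends up on the left side and the left-eye view on
    the right, as needed for cross-eyed stereogram viewing."""
    mid = len(frame[0]) // 2
    return [row[mid:] + row[:mid] for row in frame]
```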
Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana
populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
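The activity-extraction idea above can be approximated digitally: collapse each frame to its mean luminance, band-pass the resulting 1-D signal, and count threshold crossings as movement "events". In this sketch the analog 0.3-10 Hz filter is replaced by a difference of moving averages, and the window sizes and event threshold are assumptions for the example, not the circuit's measured response.

```python
def mean_luminance(frame):
    """Average pixel value of one frame (list of rows of ints)."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def moving_avg(sig, win):
    """Causal moving average with a window of up to `win` samples."""
    return [sum(sig[max(0, i - win + 1): i + 1]) /
            len(sig[max(0, i - win + 1): i + 1]) for i in range(len(sig))]

def activity_events(frames, fast_win=3, slow_win=15, threshold=5.0):
    """Count rising-edge crossings of the band-passed luminance signal.
    The fast/slow moving-average difference crudely mimics a band-pass."""
    lum = [mean_luminance(f) for f in frames]
    band = [f - s for f, s in zip(moving_avg(lum, fast_win),
                                  moving_avg(lum, slow_win))]
    return sum(1 for i in range(1, len(band))
               if band[i] > threshold >= band[i - 1])
```

Each counted edge corresponds to a fly entering or leaving the image, so inter-event durations can be derived from the edge indices in the same way.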
Recent technological progress offers the opportunity to significantly transform conventional open surgical procedures in ways that allow minimally invasive surgery (MIS) to be accomplished by specific operative instruments' entry into the body through key-sized holes rather than large incisions. Although MIS offers an opportunity for less trauma and quicker recovery, thereby reducing length of hospital stay and attendant costs, the complex nature of this procedure makes it difficult to master, not least because of the limited work area and constricted degrees of freedom. Accordingly, this research seeks to design a Teach and Playback device that can aid surgical training by key-framing and then reproducing surgical motions. The result is an inexpensive and portable Teach and Playback laparoscopic training device that can record a trainer's surgical motions and then play them back for trainees. Indeed, such a device could provide a training platform for surgical residents generally and would also lend itself to many other applications for robot-assisted tasks that require complex motion training and control.
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
In the information age, video processing is developing rapidly in the direction of intelligent processing, and complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 frame structure integrates image defogging, image fusion, image stabilization and image enhancement into an organic whole, with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses video applications in security monitoring and related fields, so that video surveillance can be fully effective and enterprise economic benefits improved.
This paper presents an H.323 standard-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features, including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for extracting and tracking foreground video objects in real time from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
Д В Сенашенко
The article describes distance learning systems used in world practice. The author gives a classification of video communication systems. Aspects of using Skype software in the Russian Federation are discussed. In conclusion, the author provides a review of modern production video conference systems used as tools for distance learning.
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... certain video analytics software, systems, components thereof, and products containing same by reason of..., Inc. The remaining respondents are Bosch Security Systems, Inc.; Robert Bosch GmbH; Bosch...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... States after importation of certain video analytics software, systems, components thereof, and products...; Bosch Security Systems, Inc. of Fairpoint, New York; Samsung Techwin Co., Ltd. of Seoul, Korea; Samsung...
Jordaan, Odia; Coetzee, Marié-Heleen
This article explores the ways in which playback theatre was used to interrogate the views of adolescents on their social context(s) and establish what the personal and dominant discourses operating in their views were. Playback theatre, with its focus on reframing personal stories to generate new perspectives on these stories, was an appropriate…
This work presents a novel indoor video surveillance system capable of detecting falls of humans; the proposed system can detect and evaluate human posture as well. To evaluate human movements, the background model is developed using the codebook method, and the possible positions of moving objects are extracted using background and shadow elimination. Extracting the foreground image introduces noise and damage into the image; the noise is therefore eliminated using morphological and size filters, and the damaged image is repaired. Once the image object of a human is extracted, whether or not the posture has changed is evaluated using the aspect ratio and height of the human body. Meanwhile, the proposed system detects a change of posture and extracts the histogram of the object projection to represent its appearance. The histogram becomes the input vector of a K-Nearest Neighbor (K-NN) algorithm and is used to evaluate the posture of the object. Capable of accurately detecting different postures of a human, the proposed system increases fall detection accuracy. Importantly, the proposed method detects posture using the frame ratio and the displacement of height in the image. Experimental results demonstrate that the proposed system can further improve system performance and fall identification accuracy.
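The aspect-ratio and height rule for posture change can be sketched as follows; the thresholds and the bounding-box formulation are illustrative assumptions rather than the paper's tuned values.

```python
def bounding_box(mask):
    """mask: list of rows of 0/1 foreground values -> (width, height)
    of the foreground blob, or (0, 0) if the mask is empty."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return 0, 0
    return max(xs) - min(xs) + 1, max(ys) - min(ys) + 1

def is_fall(prev_mask, cur_mask, ratio_thresh=1.0, drop_thresh=0.5):
    """Flag a fall when the blob becomes wider than it is tall AND its
    height drops sharply relative to the previous frame."""
    _, h0 = bounding_box(prev_mask)
    w1, h1 = bounding_box(cur_mask)
    if h0 == 0 or h1 == 0:
        return False
    aspect = w1 / h1
    height_drop = (h0 - h1) / h0
    return aspect > ratio_thresh and height_drop > drop_thresh
```

A full system would feed the blob's projection histogram to the K-NN classifier on top of this rule; the bounding-box test alone already separates upright from lying postures in the simple case.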
Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.
FPGA devices with embedded DSP and memory blocks and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain through Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to the Nios-II processor using Altera's Avalon Memory-Mapped protocol.
Wen, Ming; Hu, Haibo
To meet the demands for high-definition video and real-time transmission during endoscopic surgery, this paper designs an HD mobile video transmission system. The system uses H.264/AVC to encode the original video data and transports it over the network using the RTP/RTCP protocols. Meanwhile, the system achieves stable video transmission on portable terminals (such as tablet PCs and mobile phones) over a 3G mobile network. Test results verify strong repair ability and stability under conditions of low bandwidth, high packet loss rate and high delay, and show high practical value.
Walton, James S.; Hallamasek, Karen G.
The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.
Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen
at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...
The digital audio signal processor (DSP) TC9446F series has been developed for silicon audio playback devices with memory media such as flash memory, for DVD players, and for AV devices such as TV sets. It supports AAC (Advanced Audio Coding, 2ch) and MP3 (MPEG-1 Layer 3), the audio compression techniques used for transmitting music over the Internet. It also supports compression formats such as Dolby Digital, DTS (Digital Theater Systems) and MPEG-2 audio, adopted for DVDs and similar media. It can carry built-in audio signal processing programs such as Dolby Pro Logic, equalizer, sound field control, and 3D sound. The TC9446XB has been newly added to the lineup; it adopts an FBGA (fine-pitch ball grid array) package for portable audio devices. (translated by NEDO)
This work presents a fall detection system based on image processing technology; the system can detect falls by various humans via analysis of video frames. First, the system utilizes a Gaussian mixture background model to generate information about the background, and the noise and shadows are eliminated to extract the possible positions of moving objects. The extraction of a foreground image generates noise and damage; therefore, morphological and size filters are utilized to eliminate this noise and repair the damage to the image. Extraction of the foreground image yields the locations of human heads in the image, and the median point, height, and aspect ratio of the people in the image are calculated. These characteristics are utilized to trace objects, and changes in the characteristics of objects across consecutive images are used to evaluate whether persons enter or leave the scene. The fall detection method uses the height and aspect ratio of the human body, analyzes images in which one person overlaps with another, and detects whether a human has fallen or not. Experimental results demonstrate that the proposed method can efficiently detect falls by multiple persons.
de Barros, Rui Sergio Monteiro; Brito, Marcus Vinicius Henriques; de Brito, Marcelo Houat; de Aguiar Lédo Coutinho, Jean Vitor; Teixeira, Renan Kleber Costa; Yamaki, Vitor Nagai; da Silva Costa, Felipe Lobato; Somensi, Danusa Neves
The surgical microscope is an essential tool for microsurgery. Nonetheless, several promising alternatives are being developed, including endoscopes and laparoscopes with video systems. However, these alternatives have only been used for arterial anastomoses so far. The aim of this study was to evaluate the use of a low-cost video-assisted magnification system in end-to-side neurorrhaphy in rats. Forty rats were randomly divided into four matched groups: (1) normality (sciatic nerve was exposed but was kept intact); (2) denervation (fibular nerve was sectioned, and the proximal and distal stumps were sutured-transection without repair); (3) microscope; and (4) video system (fibular nerve was sectioned; the proximal stump was buried inside the adjacent musculature, and the distal stump was sutured to the tibial nerve). Microsurgical procedures were performed with guidance from a microscope or video system. We analyzed weight, nerve caliber, number of stitches, times required to perform the neurorrhaphy, muscle mass, peroneal functional indices, latency and amplitude, and numbers of axons. There were no significant differences in weight, nerve caliber, number of stitches, muscle mass, peroneal functional indices, or latency between microscope and video system groups. Neurorrhaphy took longer using the video system (P microscope group than in the video group. It is possible to perform an end-to-side neurorrhaphy in rats through video system magnification. The success rate is satisfactory and comparable with that of procedures performed under surgical microscopes. Copyright © 2017 Elsevier Inc. All rights reserved.
Recent years have seen significant investment and increasingly effective use of Video Analytics (VA) systems to detect intrusion or attacks in sterile areas. Currently there are a number of manufacturers who have achieved the Imagery Library for Intelligent Detection Systems (i-LIDS) primary detection classification performance standard for the sterile zone detection scenario. These manufacturers have demonstrated the performance of their systems under evaluation conditions using uncompressed evaluation video. In this paper we consider the effect on the detection rate of an i-LIDS primary-approved sterile zone system when compressed sterile zone scenario video clips are used as the input. Preliminary test results demonstrate a change in detection rate with compression, with the time to alarm increasing at greater compression. Initial experiments suggest that detection performance does not degrade linearly as a function of compression ratio. These experiments form a starting point for a wider set of planned trials that the Home Office will carry out over the next 12 months.
Gestich, Carla C; Caselli, Christini B; Nagy-Reis, Mariana B; Setz, Eleonore Z F; da Cunha, Rogério G T
Accurate measures of animal population densities are essential to assess their status and demography and to answer ecological questions. Among the several methods proposed to collect abundance data, line transect sampling is used the most. The assumptions required to obtain accurate density estimates through this method, however, are rarely met when studying primates. As most primate species are vocally active, density estimates can be improved by associating transect sampling with playback point counts to scan the entire study area. Yet, attention to playback procedure and data collection design is necessary. Here, we describe a protocol to assess primate densities using playback and test its application on surveys of Callicebus nigrifrons, a small Neotropical primate that shows site fidelity and active vocal behavior. We list important steps and discuss precautions that should be considered, from the adjustments of the recordings in the lab to field procedures in the playback broadcasting sessions. Prior to the surveys, we conducted playback trials with three habituated wild groups at three forest remnants to test their response to the playback stimuli at different distances. Based on these trials, we defined the radius covered by the playback sessions. Then, we conducted two surveys in 12 forest remnants in the northeast of São Paulo State, Brazil. The density estimates were consistent between the two surveys. As the playback survey protocol described here has proved to be a simple and useful tool for surveying vocal primates and generated reliable data, we suggest that it is a good alternative method to estimate the density of species, particularly those that are responsive to playbacks and show site fidelity. © 2016 Wiley Periodicals, Inc.
..., ``Nintendo''). The products accused of infringing the asserted patents are gaming systems and related... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Commission...
National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...
Yamada, Takaaki; Echizen, Isao; Tezuka, Satoru; Yoshiura, Hiroshi
Emerging broadband networks and the high performance of PCs provide new business opportunities for live video streaming services for Internet users at sport events or music concerts. Digital watermarking for video helps protect the copyright of the video content, and real-time processing is an essential requirement. For a small-scale start of a new business, this should be achieved by flexible software without special equipment. This paper describes a novel real-time watermarking system implemented on a commodity PC. We propose the system architecture and methods to shorten watermarking time by reusing the estimated watermark imperceptibility among neighboring frames. A prototype system enables real-time processing in a pipeline that captures NTSC signals, watermarks the video, encodes it to MPEG-4 (QVGA, 1 Mbps, 30 fps), and stores the video for up to 12 hours.
Ramezani, Mohsen; Yaghmaee, Farzin
In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find a user's most favored items. Finding these items relies on item or user similarities, though many factors, like sparsity and cold-start users, affect the recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) for personalized recommendation; differing views and incomplete or inaccurate tags, however, can weaken the performance of such systems. Exploiting advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a video is taken from the user in order to find and recommend a list of videos most similar to the query. Because most videos relate to humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the included action. This method draws on human action retrieval approaches. For modeling human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking. The experimental results on the HMDB, UCFYT, UCF Sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than the most commonly used methods.
The recent development of three-dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression, and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences from, and possibilities beyond, existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection, which resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. The results indicate that interaction modality affects users' choice of object-selection location in 3D, while user attitudes do not have a significant impact. Furthermore, the ray-casting-based interaction modality using a Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object-positioning accuracy.
... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. ...
... COMMISSION In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof... importation, and the sale within the United States after importation of certain video game systems and... importation of certain video game systems and wireless controllers and components thereof that infringe one or...
Today's video surveillance systems are increasingly equipped with video content analysis for a great variety of applications. However, reliability and robustness of video content analysis algorithms remain an issue. They have to be measured against ground truth data in order to quantify the performance and advancements of new algorithms. Therefore, a variety of measures have been proposed in the literature, but there has neither been a systematic overview nor an evaluation of measures for specific video analysis tasks yet. This paper provides a systematic review of measures and compares their effectiveness for specific aspects, such as segmentation, tracking, and event detection. Focus is drawn on details like normalization issues, robustness, and representativeness. A software framework is introduced for continuously evaluating and documenting the performance of video surveillance systems. Based on many years of experience, a new set of representative measures is proposed as a fundamental part of an evaluation framework.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S
The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.
Rasmussen, Marianne H.; Atem, Ana; Miller, Lee A.
The aim of this study was to investigate how wild white-beaked dolphins (Lagenorhynchus albirostris) respond to the playback of novel, anthropogenic sounds. We used amplitude-modulated tones and synthetic pulse-bursts. (Some authors in the literature use the term "burst pulse" to mean a burst of pulses or clicks.) The tones were 2 s in duration at frequencies of 100, 200, or 250 kHz in three separate playback experiments. The pulse-bursts consisted of 10 different pre-recorded white-beaked dolphin clicks, from which one was chosen randomly and repeated at a rate of 300 clicks/s for 2 s. … playbacks were conducted, 123 of which contained sound; the rest were controls. The dolphins responded behaviorally to 90 playbacks with sound. They never responded when we projected the no-sound control. The data do not allow assigning specific behavioral responses to specific acoustic stimuli. We also …
Inagaki, Hideaki; Ushida, Takahiro
In aversive or dangerous situations, adult rats emit long, characteristic ultrasonic calls, often termed "22-kHz calls," which have been suggested to play the role of alarm calls. Although a playback experiment is one of the most effective ways to investigate the alarming properties of 22-kHz calls, clear behavioral evidence showing the anxiogenic effects of such playback stimuli has not been directly obtained to date. In this study, we investigated whether playback of 22-kHz calls or synthesized sine tones could change the acoustic startle reflex (ASR), enhancement of which is widely considered a reliable index of anxiety-related negative affective states in rats. Playback of 22-kHz calls significantly enhanced the ASR in rats. Enhancement effects caused by playback of 22-kHz calls from young rats were relatively weak compared to those from adult rats. Playback of synthesized 25-kHz sine tones enhanced the ASR in subjects, but synthesized 60-kHz tones did not. Further, shortening the individual call duration of synthesized 25-kHz sine tones also enhanced the ASR. Accordingly, it is suggested that 22-kHz calls act as socially communicated alarm signals that induce anxiety in rats. The results also demonstrate that call frequency, i.e., 22 kHz, appears important for ultrasonic alarm-signal communication in rats. Copyright © 2016 Elsevier Inc. All rights reserved.
Design of automated video surveillance systems is one of the pressing challenges in the computer vision community because of such systems' ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) video streams coming directly from the camera.
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
.... It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes...
Xia, Zhen-Hua; Wang, Xiao-Shuang
With the rapid development of electronic technology, multimedia technology, and mobile communication technology, video monitoring systems are moving in an embedded, digital, and wireless direction. In this paper, a wireless video monitoring system based on WCDMA is proposed. This solution makes full use of the advantages of 3G, namely extensive network coverage and wide bandwidth. It can capture the video stream from the chip's video port, encode the image data in real time with a high-speed DSP, and transmit the monitoring images over the WCDMA wireless network with sufficient bandwidth. The experiments demonstrate that the system offers high stability, good image quality, and good transmission performance; in addition, because it adopts wireless transmission, it is not restricted by geographical position and can be widely deployed. It is therefore well suited to sparsely populated, harsh-environment scenarios.
Video surveillance systems build on the video and image processing research areas within computer science. Video processing covers various methods used to track changes in the scene of a given video, and is nowadays one of the important areas of computer science. Two-dimensional videos are used in the segmentation, object detection, and tracking processes found in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking, and similar applications. The background subtraction (BS) approach is a frequently used method for moving-object detection and tracking, and related methods exist in the literature. In this research study, we propose a more efficient method that adds to the existing ones. Based on a model produced using adaptive background subtraction (ABS), object detection and tracking software is implemented. The performance of the developed system is tested in experiments on related video datasets. The experimental results and discussion are given in the study.
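As a rough illustration of the adaptive background subtraction (ABS) idea described above, the following sketch maintains a running-average background model and flags pixels that deviate from it; the learning rate and threshold values are illustrative, not taken from the paper.

```python
import numpy as np

def abs_tracker(frames, alpha=0.05, thresh=25):
    """Adaptive background subtraction sketch: the background model is a
    running average updated with learning rate alpha; pixels deviating from
    it by more than thresh are flagged as foreground. Returns one boolean
    foreground mask per frame after the first."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        mask = np.abs(f - bg) > thresh          # detect foreground pixels
        bg = (1 - alpha) * bg + alpha * f       # adapt the background model
        masks.append(mask)
    return masks
```

The adaptation step lets the model absorb gradual scene changes (lighting drift) while still flagging fast-moving objects, which is the core advantage over a static background reference.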
M. van Persie
During a fire incident, live airborne video offers the fire brigade an additional source of information. Essential for the effective use of daylight and infrared video data from the UAS is that the information is fully integrated into the crisis management system of the fire brigade. This is a GIS-based system in which all relevant geospatial information is brought together and automatically distributed to all levels of the organisation. In the context of the Dutch Fire-Fly project, a geospatial video server was integrated with a UAS and the fire brigade's crisis management system, so that real-time geospatial airborne video and derived products can be made available at all levels during a fire incident. The most important elements of the system are the Delftdynamics Robot Helicopter, the Video Multiplexing System, the Keystone geospatial video server/editor, and the Eagle and CCS-M crisis management systems. In discussion with the Security Region North East Gelderland, user requirements and a concept of operations were defined, demonstrated, and evaluated. This article describes the technical and operational approach and results.
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
Combine harvesters usually work in sparsely populated areas with harsh environments. To achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab, and cutting table. Video data are compressed with the JPEG image compression standard, and the monitoring images are transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first explains the motivation for the system, then briefly introduces the hardware and software implementation, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800 × 600, with a response delay over the public network of about 40 ms.
To let people in different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-agent collaboration. Forward error correction (FEC) and tree-based P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference site can then speak and discuss by applying to become the interactive focus; and the introduction of multi-agent collaboration technology improves the system's robustness. The experiments showed that, under normal network conditions, the system can support 350 branch conference nodes broadcasting live simultaneously. The audio and video quality is smooth, and the system can support large-scale remote video conferences.
Surveillance videos contain a considerable amount of data, in which the information of interest to the user is sparsely distributed. Researchers construct video synopses that contain the key information extracted from a surveillance video for efficient browsing and analysis. The geospatial-temporal information of a surveillance video plays an important role in the efficient description of video content, yet current video synopsis approaches lack the introduction and analysis of geospatial-temporal information. Owing to these problems, this paper proposes an approach called "surveillance video synopsis in GIS". Based on an integration model of video moving objects and GIS, the virtual visual field and the expression model of a moving object are constructed by spatially locating and clustering the trajectory of the moving object. The subgraphs of the moving object are then reconstructed frame by frame in a virtual scene. Results show that the approach comprehensively analyzes and creates fused expression patterns between dynamic video information and geospatial-temporal information in GIS, and reduces the playback time of video content.
Video applications on mobile wireless devices are challenging because of the limited capacity of batteries, and the complex functionality of video decoding imposes high resource requirements. Power-efficient control has therefore become more critical for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of the subfunctions of the decoding process, and seldom considers the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources via a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces a utility-theoretic analysis in the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
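The utility-theoretic resource allocation mentioned above is not specified in the abstract; a minimal sketch, assuming a concave utility function per partition profile, is a greedy marginal-utility allocator that hands each unit of the energy budget to the profile with the largest marginal gain. Profile names and utility functions are hypothetical.

```python
def allocate_energy(budget, profiles, step=1.0):
    """Greedy marginal-utility allocation of an energy budget across
    decoding partition profiles. `profiles` maps a profile name to a
    concave utility function u(energy); every unit of energy goes to the
    profile whose utility would increase the most."""
    alloc = {name: 0.0 for name in profiles}
    for _ in range(int(budget / step)):
        best = max(profiles,
                   key=lambda n: profiles[n](alloc[n] + step) - profiles[n](alloc[n]))
        alloc[best] += step
    return alloc
```

With concave utilities, this greedy rule matches the optimal water-filling style allocation in the limit of small steps, which is why diminishing-returns models are convenient here.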
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
We propose an easy-to-construct digital video editing system, ideal for producing video documentation and still images, that is applicable to many video sources in the operating room; the system is described in detail. It has proved easy to use and permits videography to be obtained quickly and easily. By mixing different streams of video input from all the devices in use in the operating room and applying filters and effects, it produces a final, professional end-product. Recording on a DVD provides an inexpensive, portable, and easy-to-use medium on which to store, re-edit, or tape at a later time. From the stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software, and the high standard of quality makes the proposed system one of the lowest-priced products available today.
Heckendorn, F.M.; Robinson, C.W.
Specialized miniature low cost video equipment has been effectively used in a number of remote, radioactive, and contaminated environments at the Savannah River Site (SRS). The equipment and related techniques have reduced the potential for personnel exposure to both radiation and physical hazards. The valuable process information thus provided would not have otherwise been available for use in improving the quality of operation at SRS.
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05–0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
Ferreira, João, E-mail: email@example.com [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.
Action observation studies have investigated whether changing the speed of an observed movement affects the action observation network. There are two types of speed-changing conditions: one involves "changes in actual movement velocity," and the other "manipulation of video speed." Previous studies have investigated the effects of these conditions separately, but to date no study has directly investigated the differences between them. In the "movement velocity condition," increased velocity is associated with increased muscle activity; this change in muscle activity does not occur in the "video speed condition." Therefore, a difference in the results obtained under these conditions can be considered to reflect a difference in the muscle activity of the actor in the video. The aim of the present study was to investigate the effects of the different speed-changing conditions and of spontaneous movement tempo (SMT) on the excitability of primary motor cortex (M1) during action observation, as assessed by motor-evoked potential (MEP) amplitudes induced by transcranial magnetic stimulation (TMS). A total of 29 healthy subjects observed a video clip of a repetitive index- or little-finger abduction movement under seven different speed conditions. The video clip in the movement velocity condition showed repetitive finger abduction movements made in time with an auditory metronome at frequencies of 0.5, 1, 2, and 3 Hz. In the video speed condition, playback of the 1-Hz movement velocity condition video clip was modified to show movement frequencies of 0.5, 2, or 3 Hz (Hz-Fake). TMS was applied at the time of maximal abduction, and MEPs were recorded from two right-hand muscles. There were no differences in M1 excitability between the movement velocity and video speed conditions. Moreover, M1 excitability did not vary across the speed conditions for either presentation condition. Our findings suggest that changing …
Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup
The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulty understanding its practical implications, and this leads to decreased motivation. This study aims to investigate how to optimize … the use of video to increase comprehension of the practical implications of studying business information systems. This qualitative study is based on observations and focus group interviews with first-semester business students. The findings suggest that the video examined in the case study did … not sufficiently reflect the theoretical recommendations on using video optimally in management education. It did not comply with the video learning sequence introduced by Marx and Frost (1998). However, it questions whether the level of cognitive orientation activities can become too extensive. It finds …
Roberts, Louise; Pérez-Domínguez, Rafael; Elliott, Michael
Free-ranging individual fish were observed using a baited remote underwater video (BRUV) system during sound playback experiments. This paper reports on test trials exploring BRUV design parameters, image analysis, and practical experimental designs. Three marine species were exposed to playback noise as examples of behavioural responses to impulsive sound at 163–171 dB re 1 μPa (peak-to-peak SPL) and continuous sound at 142.7 dB re 1 μPa (RMS SPL), exhibiting directional changes and accelerations. The methods described here indicate the efficacy of BRUV for examining the behaviour of free-ranging species during noise playback, rather than using confinement. Given the increasing concern about the effects of water-borne noise, for example its inclusion within the EU Marine Strategy Framework Directive, and the lack of empirical evidence for setting thresholds, this paper discusses the use of BRUV, and short-term behavioural changes, in supporting population-level marine noise management. Copyright © 2016 Elsevier Ltd. All rights reserved.
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases, and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standards-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery, with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted, they are assigned to routes in descending order of reliability. The third tier …
Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.
Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.
Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.
The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.
Araki, Mituhiko; Nakamura, Yuichi; Fujii, Shigeo; Tsuno, Hiroshi
Three international simultaneous lectures of the post graduate level in the field of environmental science and engineering are under preparation in Kyoto University. They are planned to be opened in three Asian universities (Tsinghua University in China, University of Malaya in Malaysia, and Kyoto University in Japan) as formal courses. The contents of the lectures, purpose of the project and technical problems are reported.
Lee, Daren; Pomerantz, Marc
Live monitoring and post-flight analysis of telemetry data play a vital role in the development, diagnosis, and deployment of components of a space flight mission. Requirements for such a system include low end-to-end latency between data producers and visualizers, preserved ordering of messages, data stream archiving with random-access playback, and real-time creation of derived data streams. We evaluate the RabbitMQ and Kafka message brokering systems on how well they enable a real-time, scalable, and robust telemetry framework that delivers telemetry data to multiple clients across heterogeneous platforms and flight projects. In our experiments using an actively developed robotic arm testbed, Kafka yielded a much higher message throughput rate and a consistent publishing rate across the number of topics and consumers. Consumer message rates were consistent across the number of topics but exhibited bursty behavior as increasing numbers of consumers contended for a single topic partition.
Chen Homer H
The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to the digital home, surveillance, IPTV, and online games.
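The multi-channel allocation idea described above can be sketched with a toy complexity-distortion model. Everything below is an assumption for illustration: the model form D(c) = d0/(1+c), the greedy marginal-gain rule, and all names are hypothetical, not the paper's actual formulation.

```python
# Hypothetical sketch: allocate a computational budget (cycle units)
# across channels so that each extra unit goes to the channel whose
# distortion it reduces the most. The model D(c) = d0 / (1 + c) is a
# toy stand-in for a real complexity-distortion curve.

def distortion(d0, cycles):
    """Toy complexity-distortion model: more cycles, less distortion."""
    return d0 / (1.0 + cycles)

def allocate(budget, base_distortions):
    """Greedy marginal-gain allocation of integer cycle units."""
    alloc = [0] * len(base_distortions)
    for _ in range(budget):
        # give the next unit to the channel with the largest distortion drop
        gains = [
            distortion(d0, a) - distortion(d0, a + 1)
            for d0, a in zip(base_distortions, alloc)
        ]
        best = gains.index(max(gains))
        alloc[best] += 1
    return alloc

# three channels of decreasing difficulty share 10 cycle units
alloc = allocate(10, [4.0, 2.0, 1.0])
```

As expected of a convex model, the harder channels receive more of the budget, but no channel is starved entirely.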
Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya
A steep learning curve is encountered initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope. Surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful in reducing the initial learning curve of neuroendoscopy.
渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一
The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.
Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki
In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). Each player region is extracted from the captured images manually. The background region is estimated automatically by observing the chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed, and our study is ongoing.
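The automatic background estimation step can be illustrated with a per-pixel temporal statistic. The temporal median used here is one common way to suppress moving players while keeping the static field; the authors' exact chrominance-change rule may differ, and the tiny grids are toy data.

```python
# Illustrative sketch: estimate the static background of a fixed camera
# by taking, for every pixel, the median of its values over time. Moving
# players occupy any given pixel only briefly, so the median recovers
# the field behind them.

from statistics import median

def estimate_background(frames):
    """frames: list of 2-D grids (lists of rows) of pixel values.
    Returns a grid whose pixels are the temporal median."""
    h, w = len(frames[0]), len(frames[0][0])
    return [
        [median(f[y][x] for f in frames) for x in range(w)]
        for y in range(h)
    ]

# a 1x1 "frame": background value 10, briefly occluded by a player (200)
frames = [[[10]], [[10]], [[200]], [[10]], [[10]]]
bg = estimate_background(frames)
```

A real system would run this per chrominance channel and update the estimate over a sliding window.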
Smith, Jemma; Hand, Linda; Dowrick, Peter W.
This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…
Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J
A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.
Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald
The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity in embryonic stages of fish exposed to the test chemical. The current standard, like most traditional methods for evaluating aquatic toxicity, provides, however, little understanding of the effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction can occur at sub-lethal concentrations well below LC10. Behavioral studies can, therefore, provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking the behavioral responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video microscopy. We employed miniaturized CMOS cameras to perform high-definition video recording and analysis of the earliest vertebrate behavioral responses. The main objective was to develop biocompatible embryo-positioning structures suitable for high-throughput imaging, together with video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.
Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.
In support for global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support for space-borne or other global TLE observation efforts.
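The kind of event detection such software performs can be sketched as a simple change trigger on successive video fields. The mean-absolute-difference rule and the threshold value below are assumptions for illustration, not the system's actual detector.

```python
# Minimal sketch of a TLE-style event trigger: flag a video field whose
# mean absolute difference from the previous field exceeds a threshold,
# so that only the flagged fields need to be recorded and time-stamped.

def detect_events(fields, threshold=20.0):
    """fields: list of flat pixel lists, one per 50 Hz video field.
    Returns indices of fields that changed sharply from their
    predecessor (candidate events)."""
    events = []
    for i in range(1, len(fields)):
        prev, cur = fields[i - 1], fields[i]
        mad = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if mad > threshold:
            events.append(i)
    return events

quiet = [5, 5, 5, 5]
flash = [250, 250, 250, 250]    # a transient bright field
events = detect_events([quiet, quiet, flash, quiet])
```

Note that both the onset and the decay of the flash trigger the detector; a real pipeline would merge adjacent detections into one event.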
Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen
Low-resolution, unsharp facial images are often captured from surveillance videos because of the long human-camera distance and human movements. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movements and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static camera and one active camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm. A stereo camera model is then employed to estimate the location and velocity of a human face with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing facial images of a walking human clearly on the first attempt in 90% of the test cases.
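The delay-compensation step can be sketched as a constant-velocity extrapolation: given the face position and velocity from the stereo model, predict where the face will be once the active camera has finished moving. The linear model and all names here are illustrative assumptions, not the paper's Human-Camera Synchronization Model itself.

```python
# Hedged sketch of position prediction under a known mechanical delay.
# position and velocity are (x, y, z) tuples in metres and metres per
# second; delay is the camera's pan/tilt settling time in seconds.

def predict_position(position, velocity, delay):
    """Constant-velocity prediction of where the face will be after
    the active camera's mechanical delay has elapsed."""
    return tuple(p + v * delay for p, v in zip(position, velocity))

# a person 4 m away walking along x while approaching slightly,
# with a 0.5 s mechanical delay before the camera is on target
pred = predict_position((0.0, 1.5, 4.0), (1.0, 0.0, -0.5), 0.5)
```

The active camera is then commanded to the pan/tilt/zoom values corresponding to `pred` rather than to the face's current position.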
Lee, June; Yoon, Seo Young; Lee, Chung Hyun
The purposes of the study are to investigate CHLS (Cyber Home Learning System) in online video conferencing environment in primary school level and to explore the students' responses on CHLS-VC (Cyber Home Learning System through Video Conferencing) in order to explore the possibility of using CHLS-VC as a supportive online learning system. The…
de Ridder, Ad C.; Kindt, S.; Frimout, Emmanuel D.; Biemond, Jan; Lagendijk, Reginald L.
The forthcoming introduction of helical-scan digital data tape recorders with high access bandwidth and large capacity will facilitate the recording and retrieval of a wide variety of multimedia information from different sources, such as computer data and digital audio and video. For the compression of digital audio and video, the MPEG standard has been accepted internationally. Although helical-scan tape recorders can store and play back MPEG-compressed signals transparently, they are not well suited for carrying out special playback modes, in particular fast forward and fast reverse. Only random portions of the original MPEG bitstream are recovered on fast playback. Unfortunately, these shreds of information cannot be interpreted by a standard MPEG decoder, due to loss of synchronization and missing reference pictures. In the EC-sponsored RACE project DART (Digital Data Recorder Terminal), the possibilities for recording and fast playback of MPEG video on a helical-scan recorder have been investigated. In the approach we present in this paper, we assume that no transcoding is carried out on the incoming bitstream at recording time and that no additional information is recorded. To use the shreds of information for the reconstruction of interpretable pictures, a bitstream validator has been developed to achieve conformance to the MPEG-2 syntax during fast playback. The concept has been validated by realizing hardware demonstrators that connect to a prototype helical-scan digital data tape recorder.
Brunner, M; Ittner, W
This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly and menu-driven, and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a number of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or by a simple call to the function-subroutine library in application programs. Examples are supplied from the area of biomedical research, e.g. in-vivo microscopy.
This paper investigates a video image-processing system for vehicle detection and counting, comprising vehicle detection, image processing of vehicle targets, and vehicle counting functions. Vehicle detection uses the inter-frame difference method together with vehicle-shadow segmentation. The image-processing functions apply grey-scale conversion of the colour image, image segmentation, mathematical morphology analysis, and image filling to the detected targets, after which the target vehicle is extracted. The counting function counts the detected vehicles. The system uses the inter-frame video difference method to detect vehicles and a frame-overlay and boundary-comparison method to complete the counting, achieving a high recognition rate, fast operation, and ease of use. The purpose of this paper is to raise the level of modernization and automation in traffic management. This study can serve as a reference for the future development of related applications.
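The inter-frame difference idea can be sketched in a few lines: subtract consecutive frames, threshold the result to a motion mask, and count connected motion regions as candidate vehicles. The 1-D frames and threshold below are toy assumptions; the real system works on 2-D frames with shadow segmentation and morphology.

```python
# Much-simplified sketch of inter-frame difference vehicle counting.
# Shown in 1-D (one scanline) for brevity.

def motion_mask(prev, cur, threshold=30):
    """Pixels that changed strongly between frames are marked 1."""
    return [1 if abs(a - b) > threshold else 0 for a, b in zip(prev, cur)]

def count_regions(mask):
    """Count runs of consecutive 1s (each run ~ one moving vehicle)."""
    count, inside = 0, False
    for m in mask:
        if m and not inside:
            count += 1
        inside = bool(m)
    return count

prev = [10, 10, 10, 10, 10, 10, 10, 10]   # empty road
cur  = [10, 200, 200, 10, 10, 180, 180, 10]  # two vehicles entered
vehicles = count_regions(motion_mask(prev, cur))
```

Morphological closing (not shown) would merge fragmented runs belonging to one vehicle before counting.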
This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.
A. L. Oleinik
Subject of Research. The paper deals with the problem of multiple face tracking in a video stream. The primary application of the implemented tracking system is automatic video surveillance. The particular operating conditions of surveillance cameras are taken into account in order to increase the efficiency of the system in comparison with existing general-purpose analogs. Method. The developed system is comprised of two subsystems: a detector and a tracker. The tracking subsystem does not depend on the detector, and thus various face detection methods can be used. Furthermore, only a small portion of frames is processed by the detector in this structure, substantially improving the operation rate. The tracking algorithm is based on BRIEF binary descriptors that are computed very efficiently on modern processor architectures. Main Results. The system is implemented in C++ and experiments on processing rate and quality evaluation are carried out. MOTA and MOTP metrics are used for tracking quality measurement. The experiments demonstrated a four-fold processing rate gain in comparison with the baseline implementation that processes every video frame with the detector. The tracking quality is at an adequate level compared with the baseline. Practical Relevance. The developed system can be used with various face detectors (including slow ones) to create a fully functional high-speed multiple face tracking solution. The algorithm is easy to implement and optimize, so it may be applied not only in full-scale video surveillance systems, but also in embedded solutions integrated directly into cameras.
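The BRIEF idea the tracker relies on can be sketched compactly: a binary descriptor built from pairwise intensity comparisons at fixed offsets inside a patch, matched by Hamming distance. The patch size, number of pairs, and sampling here are toy assumptions; real BRIEF samples hundreds of pairs in a smoothed image patch.

```python
# Minimal sketch of BRIEF-style binary descriptors. Each bit records
# whether the intensity at one fixed position is less than at another;
# descriptors are compared with the Hamming distance, which is cheap
# on modern CPUs (XOR + popcount).

import random

random.seed(0)
# fixed pixel-pair layout, shared by all descriptors
PAIRS = [(random.randrange(8), random.randrange(8)) for _ in range(16)]

def brief(patch):
    """patch: flat list of 8 intensities; returns a 16-bit descriptor."""
    desc = 0
    for bit, (i, j) in enumerate(PAIRS):
        if patch[i] < patch[j]:
            desc |= 1 << bit
    return desc

def hamming(a, b):
    return bin(a ^ b).count("1")

p1 = [10, 50, 30, 90, 20, 70, 40, 60]
p2 = [12, 52, 28, 88, 22, 71, 39, 61]   # same structure, slight noise
d = hamming(brief(p1), brief(p2))
```

Because every pairwise ordering in `p2` matches `p1`, the descriptors are identical (distance 0), which is exactly the robustness to small intensity noise that makes BRIEF suitable for frame-to-frame tracking.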
Sehairi, Kamal; Chouireb, Fatima; Meunier, Jean
The objective of this study is to compare several change detection methods for a monostatic camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark, which consists of many challenging problems ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested, and several performance metrics were used to precisely evaluate the results. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work compares these methods to fill a gap in the literature, and the evaluation thus complements previous comparative studies. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
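The pixel-level scores commonly reported on CDnet-style benchmarks can be computed directly from the confusion counts of a method's change masks against ground truth. The formulas below (recall, precision, F-measure) are the standard definitions; the counts are toy values, not results from the study.

```python
# Standard pixel-level evaluation of a change detection method:
# tp = changed pixels correctly flagged, fp = static pixels wrongly
# flagged, fn = changed pixels missed.

def scores(tp, fp, fn):
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return recall, precision, f_measure

# toy confusion counts for one sequence
recall, precision, f1 = scores(tp=80, fp=20, fn=20)
```

Averaging the F-measure across all sequences of a category, and then across categories, gives the kind of ranking such comparative evaluations report.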
Geradts, Zeno J.; Merlijn, Menno; de Groot, Gert; Bijhold, Jurrien
The gait parameters of eleven subjects were evaluated to provide data for subject recognition purposes. Video images of these subjects were acquired in frontal, transversal, and sagittal (a plane parallel to the median of the body) view. The subjects walked by at their usual walking speed. The measured parameters were the hip, knee, and ankle joint angles and their time-averaged values, the thigh, foot, and trunk angles, step length and width, cycle time, and walking speed. Correlation coefficients within and between subjects for the hip, knee, and ankle rotation patterns in the sagittal aspect and for the trunk rotation pattern in the transversal aspect were similar. This implies that the intra- and inter-individual variances were equal; therefore, these gait parameters could not distinguish between subjects. A simple ANOVA with a follow-up test was used to detect significant differences for the mean hip, knee, and ankle joint angles, thigh angle, step length, step width, walking speed, cycle time, and foot angle. The number of significant differences between subjects defined the usefulness of the gait parameter. The parameter with the most significant differences between subjects was the foot angle (64%-73% of the maximal attainable significant differences), followed by the time-averaged hip joint angle (58%) and the step length (45%). The other parameters scored less than 25%, which is poor for recognition purposes. The use of gait for identification purposes is not yet possible based on this research.
Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M
A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 on respiratory examinations, were not useful educationally, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were significant (P.86. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.
Yang, Jian; Xie, Xiaofang; Wang, Yan
Based on an AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial port communication and head attitude tracking are introduced, and the code of the key parts is given.
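One way the attitude-tracking step can work is to steer the PTZ so the camera keeps pointing at a fixed world azimuth/elevation while the platform, as measured by the AHRS, moves underneath it. The angle conventions and function names below are hypothetical assumptions, not this paper's actual scheme.

```python
# Hypothetical sketch: compensate the platform attitude reported by the
# AHRS so the PTZ camera stays aimed at a fixed world direction.
# All angles are in degrees; pan is wrapped to [-180, 180).

def ptz_command(target_az, target_el, yaw, pitch):
    """target_az/target_el: desired world direction of the optical axis.
    yaw/pitch: platform attitude from the AHRS."""
    pan = (target_az - yaw + 180.0) % 360.0 - 180.0
    tilt = target_el - pitch
    return pan, tilt

# platform yawed 30 deg right and pitched 5 deg down; target sits at
# azimuth 90 deg, elevation 10 deg in world coordinates
pan, tilt = ptz_command(90.0, 10.0, yaw=30.0, pitch=-5.0)
```

The resulting pan/tilt values would then be encoded into whatever serial protocol the particular PTZ unit expects.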
Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang
Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smartphone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured on a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
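One building block such systems combine is background subtraction with a slowly adapting background model, which tolerates gradual illumination changes while still flagging a newly parked car. The exponential running average, learning rate, and threshold below are common illustrative choices, not necessarily this paper's exact components.

```python
# Hedged sketch of running-average background subtraction for a fixed
# parking camera. Pixels are a flat list for brevity; a real system
# works on 2-D frames and adds shadow and occlusion handling.

def update_background(bg, frame, alpha=0.1):
    """Exponential running average: adapts slowly to lighting changes."""
    return [b * (1 - alpha) + f * alpha for b, f in zip(bg, frame)]

def foreground(bg, frame, threshold=40):
    """Pixels far from the background model are marked foreground."""
    return [1 if abs(f - b) > threshold else 0 for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]
frame = [102.0, 220.0, 99.0]     # middle pixel: a parked car arrives
mask = foreground(bg, frame)
bg = update_background(bg, frame)
```

In practice the learning rate is kept small enough that a car occupying a stall for minutes is not absorbed into the background before the occupancy change is reported.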
Ignacio, Joselito; Center for Homeland Defense and Security Naval Postgraduate School
This proposed system process aims to improve subway safety through better enabling the rapid detection and response to a chemical release in a subway system. The process is designed to be location-independent and generalized to most subway systems despite each system's unique characteristics.
I. Vaishnavi (Ishan); P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack); B. Gao (Bo); D.C.A. Bulterman (Dick)
This paper presents a new approach for media presentation continuity in playback mode. We use the term presentation continuity over session transfer since our solution is at the presentation layer. Previous research on this topic has focused on transferring a particular stream or set of
I. Vaishnavi (Ishan); P.S. Cesar Garcia (Pablo Santiago); A.J. Jansen (Jack); B. Gao (Bo); D.C.A. Bulterman (Dick)
This demo presents a new approach for media presentation continuity in playback mode. We use the term presentation continuity over session transfer since our solution is at the presentation layer. Previous research on this topic has focused on transferring a particular stream or set of
Community-based performance often facilitates participation through story-based processes and in this way could be seen as enacting a form of inclusive democracy. This paper examines a playback theater performance with a refugee and asylum seeker audience and questions whether inclusive, democratic participation can be fostered. It presents a…
Hua, My; Yip, Henry; Talbot, Prue
The objective was to analyse and compare puff and exhalation duration for individuals using electronic nicotine delivery systems (ENDS) and conventional cigarettes in YouTube videos. Video data from YouTube videos were analysed to quantify puff duration and exhalation duration during use of conventional tobacco-containing cigarettes and ENDS. For ENDS, comparisons were also made between 'advertisers' and 'non-advertisers', genders, brands of ENDS, and models of ENDS within one brand. Puff duration (mean =2.4 s) for conventional smokers in YouTube videos (N=9) agreed well with prior publications. Puff duration was significantly longer for ENDS users (mean =4.3 s) (N = 64) than for conventional cigarette users, and puff duration varied significantly among ENDS brands. For ENDS users, puff duration and exhalation duration were not significantly affected by 'advertiser' status, gender or variation in models within a brand. Men outnumbered women by about 5:1, and most users were between 19 and 35 years of age. YouTube videos provide a valuable resource for studying ENDS usage. Longer puff duration may help ENDS users compensate for the apparently poor delivery of nicotine from ENDS. As with conventional cigarette smoking, ENDS users showed a large variation in puff duration (range =1.9-8.3 s). ENDS puff duration should be considered when designing laboratory and clinical trials and in developing a standard protocol for evaluating ENDS performance.
Cheah Wai Shiang
Agent-oriented methodology (AOM) is a comprehensive and unified methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, the extent to which this is true has not yet been determined. It is therefore vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.
Burner, A. W.; Rummler, D. R.; Goad, W. K.
A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range which is being constructed to support the development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
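The measurement concept can be sketched as reducing a digitized target frame to a hole location: threshold the dark hole pixels and take their centroid. The thresholding rule and the toy grey-level grid below are assumptions; the real system calibrates pixel coordinates into microns at the target plane.

```python
# Hedged sketch of automated bullet-hole location: the hole is darker
# than the paper, so below-threshold pixels are collected and their
# centroid gives a sub-pixel hole position.

def hole_centroid(image, threshold=50):
    """image: 2-D list of grey levels. Returns the (x, y) centroid of
    below-threshold (dark) pixels."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < threshold:
                xs.append(x)
                ys.append(y)
    return sum(xs) / len(xs), sum(ys) / len(ys)

target = [
    [200, 200, 200, 200],
    [200,  10,  20, 200],
    [200,  15,  12, 200],
    [200, 200, 200, 200],
]
cx, cy = hole_centroid(target)
```

Averaging over many hole pixels is what makes micron-level repeatability plausible even though a single pixel covers a much larger area.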
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission Determination Finding No Violation of the Tariff Act of 1930 AGENCY: U.S. International Trade Commission. ACTION...
Horn, Eva; And Others
Three nonvocal students (ages 5-8) with severe physical handicaps were trained in scan and selection responses (similar to responses needed for operating augmentative communication systems) using a microcomputer-operated video-game format. Results indicated that all three children showed substantial increases in the number of correct responses and…
Pope, Alan T.; Bogart, Edward H.
Describes the Extended Attention Span Training (EAST) system for modifying attention deficits, which takes the concept of biofeedback one step further by making a video game more difficult as the player's brain waves indicate that attention is waning. Notes contributions of this technology to neuropsychology and neurology, where the emphasis is on…
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
The author demonstrates a new system useful for reflective learning. Our new system offers an environment in which one can use handwriting tablet devices to bookmark symbolic and descriptive feedback in simultaneously recorded videos. If one uses video recording and feedback check sheets in reflective learning sessions, one can…
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for... limited exclusion order and a cease and desist order against certain video game systems and wireless...
AKINCI, Gökay; Polat, Ediz; Koçak, Orhan Murat
Eye pupil detection systems have become increasingly popular in image processing and computer vision applications in medical systems. In this study, a video-based eye pupil detection system is developed for diagnosing bipolar disorder. Bipolar disorder is a condition in which people experience changes in cognitive processes and abilities, including reduced attentional and executive capabilities and impaired memory. In order to detect these abnormal behaviors, a number of neuropsychologi...
Cihak, David; Fahrenkrog, Cynthia; Ayres, Kevin M.; Smith, Catherine
This study evaluated the efficacy of video modeling delivered via a handheld device (video iPod) and the use of the system of least prompts to assist elementary-age students with transitioning between locations and activities within the school. Four students with autism learned to manipulate a handheld device to watch video models. An ABAB…
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
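As a toy illustration of the kind of interchangeable foreground-detection stage such a framework could integrate (not the paper's actual method), the sketch below marks foreground pixels by simple frame differencing and derives one trivial "feature" from the result. The frame representation and threshold value are assumptions for illustration only.

```python
# Hypothetical frame-differencing foreground detector. Frames are plain
# 2-D lists of grayscale intensities; the threshold is an assumed value.

def foreground_mask(prev_frame, frame, threshold=25):
    """Mark pixels whose intensity changed by more than `threshold`."""
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(prev_row, row)]
        for prev_row, row in zip(prev_frame, frame)
    ]

def foreground_area(mask):
    """A trivial ensemble feature: fraction of pixels in the foreground."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

if __name__ == "__main__":
    prev = [[10] * 4 for _ in range(4)]
    cur = [row[:] for row in prev]
    cur[1][1] = 200   # a "moving object" covers two pixels
    cur[1][2] = 200
    mask = foreground_mask(prev, cur)
    print(foreground_area(mask))  # 2 of 16 pixels -> 0.125
```

A real system would feed richer features than area (shape, trajectory, density) into the downstream classifier.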
Thorsdatter Orvedal Aase, Anne Lene
Full Text Available In this study we used a portable, event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor that detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally relied on direct observation, which is time demanding, or on continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e., ca. 0.35 min of review per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were classified only to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could make data sampling more efficient and reveal new knowledge in pollination ecology (e.g., species identification and pollinating behaviour).
Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.
One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing that this new system is potentially useful for cetacean studies.
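The arithmetic behind such stereo measurements follows the standard pinhole-stereo relation Z = fB/d; the sketch below applies it and back-projects an image-plane length to metres. The focal length, baseline, and pixel values are invented for illustration, not the authors' calibration.

```python
# Illustrative stereo-photogrammetry arithmetic (standard pinhole model).
# All numeric parameters below are assumed values, not from the paper.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def length_from_pixels(length_px, depth_m, focal_px):
    """Back-project an image-plane length to metres at depth Z."""
    return length_px * depth_m / focal_px

f = 2000.0   # focal length in pixels (assumed)
B = 0.5      # baseline between the two SLR cameras in metres (assumed)
d = 500.0    # disparity of the blowhole between the two images (assumed)

Z = depth_from_disparity(f, B, d)        # 2.0 m to the dolphin
bh_df = length_from_pixels(400.0, Z, f)  # 400 px span -> 0.4 m BH-DF
print(Z, bh_df)
```

In practice the calibration frame with reference points replaces these assumed camera constants.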
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review, analyze and evaluate the current MOG system architecture. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, i.e., interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
Full Text Available A wireless real-time image transmission system is designed using a 3G wireless communication platform and an ARM + DSP embedded system. In a 3G network environment, the embedded equipment implements acquisition, coding, network transmission, decoding and playback. The system provides intelligent control of real-time video as well as video compression, storage and playback over the 3G embedded image transmission link, and is especially suitable for remote locations or applications where cable network transmission is impractical. Tests show that video files are transferred quickly over the 3G network, real-time H.264 video plays back smoothly with little colour distortion, and the server can control the client through remote intelligent units.
Seo, Young-Ho; Lee, Yoon-Hyuk; Koo, Ja-Myung; Kim, Woo-Youl; Yoo, Ji-Sang; Kim, Dong-Wook
We propose a new system that can generate digital holograms using natural color information. The system consists of a camera system for capturing images (object points) and software (S/W) for various image processing. The camera system uses a vertical rig equipped with two depth-and-RGB cameras and a cold mirror, whose reflectance varies with wavelength, for obtaining images from the same viewpoint. The S/W comprises engines for processing the captured images and for executing computer-generated hologram (CGH) calculations to produce digital holograms on general-purpose graphics processing units. Each algorithm was implemented in the C/C++ and CUDA languages, and all engines, packaged as libraries, were integrated in the LabVIEW environment. The proposed system can generate about 10 digital holographic frames per second from about 6 K object points.
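The core of point-cloud CGH generation can be sketched in a few lines: each object point contributes a spherical-wave fringe to every hologram pixel. This is a generic textbook formulation, not the authors' GPU implementation; the wavelength, pixel pitch, grid size, and the single test point are illustrative values.

```python
# Toy point-cloud computer-generated hologram (real, cosine-fringe form).
# Parameters are assumed illustrative values, not the paper's configuration.
import math

def cgh(points, n, pitch, wavelength):
    """Return an n x n hologram from (x, y, z, amplitude) object points."""
    h = [[0.0] * n for _ in range(n)]
    for px, py, pz, amp in points:
        for j in range(n):
            for i in range(n):
                u = (i - n // 2) * pitch          # pixel position on the
                v = (j - n // 2) * pitch          # hologram plane (metres)
                r = math.sqrt((u - px) ** 2 + (v - py) ** 2 + pz ** 2)
                h[j][i] += amp * math.cos(2 * math.pi * r / wavelength)
    return h

# One on-axis point 1 cm from an 8x8 hologram with 8 um pixels, green light:
holo = cgh([(0.0, 0.0, 0.01, 1.0)], n=8, pitch=8e-6, wavelength=532e-9)
```

The O(points × pixels) inner loop is exactly what makes GPU acceleration attractive for real-time frame rates.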
Amy, Mathieu; Sprau, Philipp; de Goede, Piet; Naguib, Marc
Individuals often differ consistently in behaviour across time and contexts, and such consistent behavioural differences are commonly described as personality. Personality can play a central role in social behaviour both in dyadic interactions and in social networks. We investigated whether explorative behaviour, as proxy of personality of territorial male great tits (Parus major), predicts their own and their neighbours' territorial responses towards simulated intruders. Several weeks prior to playback, subjects were taken from the wild to test their exploratory behaviour in a standard context in the laboratory. Exploratory behaviour provides a proxy of personality along a slow-fast explorer continuum. Upon release, males were radio-tracked and subsequently exposed to interactive playback simulating a more or a less aggressive territorial intruder (by either overlapping or alternating broadcast songs with the subjects' songs). At the same time, we radio-tracked a neighbour of the playback subject. Male vocal responses during playback and spatial movements after playback varied according to male explorative behaviour and playback treatment. Males with lower exploration scores approached the loudspeaker less, and sang more songs, shorter songs and songs with slower element rates than did males with higher exploration scores. Moreover, neighbour responses were related to the explorative behaviour of the subject receiving the playback but not to their own explorative behaviour. Our overall findings reveal for the first time how personality traits affect resource defence within a communication network providing new insights on the cause of variation in resource defence behaviour.
Endo, Chiaki; Sakurada, A; Kondo, T
Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record the procedures precisely in order to review them retrospectively and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan simultaneously records two kinds of video together with patient data such as heart rate, blood pressure, and SpO2. We installed this system in our bronchoscopy room and have experienced its benefits. With this system, we obtain the bronchoscopic image, a view of the bronchoscopy room, and the patient's data simultaneously. We can retrospectively check the quality of the bronchoscopic procedures, which is useful for training bronchoscopy staff. The Medical Forensic System could be installed for any kind of endoscopic procedure.
Jihwan Park; Youngsun Kong; Yunyoung Nam
In order to keep vision in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the direction opposite to head movement. Disorders of the vestibular system degrade vision and cause abnormal nystagmus and dizziness. Various approaches to diagnosing abnormal nystagmus have been reported, including rotating chair tests and videonystagmography; however, these tests are unsuitable for home use due to their high cost. Thus, a low-cost video-oculography system is needed to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained from an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotating chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase and asymmetry were 0.81, 2.74 and 17.35, respectively. We showed that our system is able to measure these clinical features.
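The gain and asymmetry metrics named above have standard clinical definitions that are easy to sketch; the formulas below are those textbook definitions, and the velocity and gain samples are invented, not data from the paper.

```python
# Standard rotary-chair VOR metrics, sketched with invented sample values.

def vor_gain(eye_velocity, head_velocity):
    """Gain: magnitude of slow-phase eye velocity over head velocity.
    The eye moves opposite to the head, so the sign is discarded."""
    return abs(eye_velocity / head_velocity)

def asymmetry_percent(rightward_gain, leftward_gain):
    """Directional asymmetry between rightward and leftward responses, in %."""
    return 100.0 * (rightward_gain - leftward_gain) / (rightward_gain + leftward_gain)

# Head rotating at 60 deg/s while the eyes drift at -48 deg/s:
print(vor_gain(-48.0, 60.0))        # 0.8
print(asymmetry_percent(0.8, 0.6))  # ~14.3 %
```

Phase is similarly derived by comparing the timing of the eye- and head-velocity waveforms, which requires the full traces rather than single samples.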
Roh, Mootaek; McHugh, Thomas J; Lee, Kyungmin
To investigate the relationship between neural function and behavior it is necessary to record neuronal activity in the brains of freely behaving animals, a technique that typically involves tethering to a data acquisition system. Optimally this approach allows animals to behave without any interference of movement or task performance. Currently many laboratories in the cognitive and behavioral neuroscience fields employ commercial motorized commutator systems using torque sensors to detect tether movement induced by the trajectory behaviors of animals. In this study we describe a novel motorized commutator system which is automatically controlled by video tracking. To obtain accurate head direction data two light emitting diodes were used and video image noise was minimized by physical light source manipulation. The system calculates the rotation of the animal across a single trial by processing head direction data and the software, which calibrates the motor rotation angle, subsequently generates voltage pulses to actively untwist the tether. This system successfully provides a tether twist-free environment for animals performing behavioral tasks and simultaneous neural activity recording. To the best of our knowledge, it is the first to utilize video tracking generated head direction to detect tether twisting and compensate with a motorized commutator system. Our automatic commutator control system promises an affordable and accessible method to improve behavioral neurophysiology experiments, particularly in mice.
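The untwisting logic described above reduces to accumulating head-direction changes, unwrapped across the ±180° boundary, and commanding a compensating motor rotation. The sketch below shows that accumulation step only; the data format and any threshold for triggering the motor are assumptions, not the authors' implementation.

```python
# Minimal sketch of tether-twist accounting from tracked head directions.
# Headings are in degrees; the trial data below is invented.

def unwrap_step(prev_deg, cur_deg):
    """Smallest signed heading change from prev_deg to cur_deg, in degrees.
    Maps the raw difference into (-180, 180] so boundary crossings
    (e.g. 350 -> 10) are read as small turns, not near-full rotations."""
    return (cur_deg - prev_deg + 180.0) % 360.0 - 180.0

def net_rotation(headings_deg):
    """Total signed rotation accumulated over a trial, in degrees."""
    return sum(unwrap_step(a, b) for a, b in zip(headings_deg, headings_deg[1:]))

# A mouse circling once in 90-degree steps, crossing the 0/360 boundary:
trial = [0, 90, 180, 270, 0]
turns = net_rotation(trial) / 360.0
print(turns)  # 1.0 -> the commutator would rotate -360 degrees to untwist
```

A motorized commutator driven this way compensates continuously, so the tether never accumulates more than a fraction of a turn.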
Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
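An order-statistic spatiotemporal filter of the family described above can be sketched in software (here as a plain median, rather than the paper's adaptive bit-serial hardware design). The 3-frame × 3×3-pixel window shape and the lack of border handling are simplifications.

```python
# Software sketch of a spatiotemporal order-statistic (median) filter.
# Window geometry is an assumption; borders are not handled.

def spatiotemporal_median(frames, t, y, x):
    """Median over the 3x3 spatial neighbourhood of (y, x)
    across frames t-1, t, t+1 (27 samples in total)."""
    window = [
        frames[tt][yy][xx]
        for tt in (t - 1, t, t + 1)
        for yy in (y - 1, y, y + 1)
        for xx in (x - 1, x, x + 1)
    ]
    window.sort()
    return window[len(window) // 2]  # 27 samples -> element 13

# An impulse-noise pixel in the middle frame is suppressed:
frames = [[[7] * 3 for _ in range(3)] for _ in range(3)]
frames[1][1][1] = 255
print(spatiotemporal_median(frames, 1, 1, 1))  # 7
```

Because a median needs only comparisons, bit-serial hardware can rank samples one bit-plane at a time, which is what makes the FPLD realization compact.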
M.Sc. (Computer Science) A video conference is an interactive meeting between two or more locations, facilitated by simultaneous two-way video and audio transmissions. People in a video conference, also known as participants, join these video conferences for business and recreational purposes. In a typical video conference, we should properly identify and authenticate every participant in the video conference, if information discussed during the video conference is confidential. This preve...
Ziemke, Robert A.
The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.
Lee, Kang Oh; Nakaji, Kei; 中司, 敬
A web-based video direct e-commerce system was developed to solve the problems in the internet shopping and to increase trust in safety and quality of agricultural products from consumers. We found that the newly developed e-commerce system could overcome demerits of the internet shopping and give consumers same effects as purchasing products offline. Producers could have opportunities to explain products and to talk to customers and get increased income because of maintaining a certain numbe...
Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel
Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753
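The per-frame tagging step described above can be sketched as attaching a sensor snapshot to each frame, keyed by its presentation time. The tag schema (timestamp, temperature, GPS coordinates) is invented for illustration and is not the authors' semantic format.

```python
# Hypothetical per-frame sensor tagging; the tag fields are assumptions.

def tag_frame(frame_index, fps, sensors):
    """Attach a snapshot of sensor readings to one video frame,
    keyed by the frame's presentation time in milliseconds."""
    return {
        "frame": frame_index,
        "t_ms": round(frame_index * 1000.0 / fps),
        **sensors,
    }

# Tag the first three frames of a 30 fps clip with beach conditions:
tags = [
    tag_frame(i, 30, {"temp_c": 27.5, "lat": 28.13, "lon": -15.43})
    for i in range(3)
]
print(tags[2])
```

A real implementation would serialize such tags alongside the video stream so the server can index them for semantic search.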
Zhiwei, Jia; Guozheng, Yan; Bingquan, Zhu
Wireless power transmission is considered a practical way of overcoming the power shortage of video capsule endoscopy (VCE). However, most patients cannot tolerate the long hours of lying inside a fixed transmitting coil during diagnosis. To develop a portable wireless power transmission system for VCE, a compact transmitting coil and a portable inverter circuit driven by rechargeable batteries are proposed. The coupled coils, optimized for stability and safety, comprise a 28-turn transmitting coil and a six-strand receiving coil. The driving circuit is designed for portability. Experiments show that the integrated system could continuously supply power to a dual-head VCE for more than 8 h at a frame rate of 30 frames per second with a resolution of 320 × 240. The portable VCE exhibits potential for clinical applications, but requires further improvement and tests.
Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton
Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
Moshirnia, Andrew; Israel, Maya
Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and education video games by embedding educational content into popular commercial video games. This study examined how different information…
Cai, Lin; Deng, Nianchun; Xiao, Zexin
The cables in the anchorage zone of a cable-stayed bridge are hidden within the embedded pipe, which makes it difficult to detect cable damage by visual inspection. We have built a detection device based on high-resolution video capture that enables remote observation of the invisible segment of a stay cable and detection of damage on the cable's outer surface within a small volume. The system mainly consists of optical stents and a precision mechanical support device, an optical imaging system, a lighting source, motor-driven control, and an IP-camera video capture system. The principal innovations of the device are: (1) a set of telescope objectives with three different focal lengths, designed for different monitoring distances and interchanged by means of a converter; (2) a lens system far separated from the lighting system, so that the imaging optical path can avoid the harsh environment surrounding the invisible part of the cables. Practice shows that the device not only collects clear surveillance video images of the cable's outer surface effectively, but also has broad application prospects in the security warning of prestressed structures.
Potel, Michael J.; MacKay, Steven A.; Sayre, Richard E.
Extracting quantitative information from movie film and video recordings has always been a difficult process. The Galatea motion analysis system represents an application of some powerful interactive computer graphics capabilities to this problem. A minicomputer is interfaced to a stop-motion projector, a data tablet, and real-time display equipment. An analyst views a film and uses the data tablet to track a moving position of interest. Simultaneously, a moving point is displayed in an animated computer graphics image that is synchronized with the film as it runs. Using a projection CRT and a series of mirrors, this image is superimposed on the film image on a large front screen. Thus, the graphics point lies on top of the point of interest in the film and moves with it at cine rates. All previously entered points can be displayed simultaneously in this way, which is extremely useful in checking the accuracy of the entries and in avoiding omission and duplication of points. Furthermore, the moving points can be connected into moving stick figures, so that such representations can be transcribed directly from film. There are many other tools in the system for entering outlines, measuring time intervals, and the like. The system is equivalent to "dynamic tracing paper" because it is used as though it were tracing paper that can keep up with running movie film. We have applied this system to a variety of problems in cell biology, cardiology, biomechanics, and anatomy. We have also extended the system using photogrammetric techniques to support entry of three-dimensional moving points from two (or more) films taken simultaneously from different perspective views. We are also presently constructing a second, lower-cost, microcomputer-based system for motion analysis in video, using digital graphics and video mixing to achieve the graphics overlay for any composite video source image.
Sun, Peter; Nagata, Shojiro
This paper discusses several technology breakthroughs that help remove the obstacles that have hindered the popularity of stereoscopic 3D; we name the result the 3DHiVision (3DHV) system solution. With advances in technology, modern projection systems and stereo LCD panels have made it possible for many more people to enjoy a 3D stereo video experience in a broader range of applications. However, the key limitations on more mainstream 3D video applications have been the availability of 3D content and the cost and complexity of 3D video production, content management and playback systems. Despite the ready availability of modern PC-based video production tools, advances in projection technology and greatly increased interest in 3D applications, the 3D video industry remains stagnant and small in scale, because the cost of producing and playing back high-quality 3D video has always strained the limits of practicality. Great as these difficulties seem to be, we have surmounted them and created a complete end-to-end 3DHiVision (3DHV for short) video system based on an embedded PC platform, which significantly reduces the cost and complexity of creating museum-quality 3D video. With this achievement, professional film makers and amateurs alike will be able to easily create, distribute and play back 3D video content. The HD-Renderer is the central component in our 3DHV solution line. It is highly efficient software capable of decrypting, decoding, dynamically adjusting parallax and rendering HD video content up to 1920x1080x2x30p in real time on an embedded PC (for theaters) or any home PC (for the mainstream) with a 3.0 GHz P4 CPU / GeForce 6600GT GPU or above; 1280x720x2x30p content can be rendered with great ease on a notebook with a 1.7 GHz P4 Mobile CPU / GeForce 6200 GPU at the time of writing.
Full Text Available Video content on the Internet has increased greatly in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that helps make multimedia content more accessible on the Web by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.
Allen, A. J.; Terry, J. L.; Garnier, D.; Stillerman, J. A.; Wurden, G. A.
A new system for routine digitization of video images is presently operating on the Alcator C-Mod tokamak. The PC-based system features high resolution video capture, storage, and retrieval. The captured images are stored temporarily on the PC, but are eventually written to CD. Video is captured from one of five filtered RS-170 CCD cameras at 30 frames per second (fps) with 640×480 pixel resolution. In addition, the system can digitize the output from a filtered Kodak Ektapro EM Digital Camera which captures images at 1000 fps with 239×192 resolution. Present views of this set of cameras include a wide angle and a tangential view of the plasma, two high resolution views of gas puff capillaries embedded in the plasma facing components, and a view of ablating, high speed Li pellets. The system is being used to study (1) the structure and location of visible emissions (including MARFEs) from the main plasma and divertor, (2) asymmetries in gas puff plumes due to flows in the scrape-off layer (SOL), and (3) the tilt and cigar-shaped spatial structure of the Li pellet ablation cloud.
Full Text Available Playback of bird songs is a useful technique for species detection; however, the method is usually not standardized. We tested playback efficiency for five Atlantic Forest birds (White-browed Warbler Basileuterus leucoblepharus, Giant Antshrike Batara cinerea, Swallow-tailed Manakin Chiroxiphia caudata, White-shouldered Fire-eye Pyriglena leucoptera and Surucua Trogon Trogon surrucura) at different times of day and seasons of the year, and for different species abundances, at the Morro Grande Forest Reserve (South-eastern Brazil) and at thirteen forest fragments in a nearby landscape. Vocalizations were broadcast monthly at sunrise, noon and sunset, during one year. For B. leucoblepharus, C. caudata and T. surrucura, sunrise and noon were more efficient than sunset. Batara cinerea presented higher efficiency from July to October. Playback expanded the favourable period for avifaunal surveys in tropical forest, usually restricted to early morning in the breeding season. Playback was efficient in detecting the presence of all species when abundance was not too low, but only B. leucoblepharus and T. surrucura showed abundance values significantly related to this efficiency. The present study provides a precise indication of the best daily and seasonal periods, and a confidence interval, to maximize the efficiency of playback in detecting the occurrence of these forest species.
Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue
Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.
This paper proposes an early-warning camera and road-sign system that can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a chosen route and intelligent road signs. Each camera module consists of a camera device and a computing unit. The computing unit captures the video stream from the camera and runs object detection algorithms; machine learning algorithms then classify the moving objects. If a moving object is classified as an animal that could endanger vehicle safety, a warning is displayed on the intelligent road signs.
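The abstract does not specify the detection algorithm the computing unit runs. A minimal frame-differencing detector of the kind such a module might use as a first stage can be sketched as follows (function names, thresholds, and the pixel-count heuristic are illustrative assumptions, not the paper's method):

```python
import numpy as np

def moving_object_mask(prev_frame, frame, threshold=25):
    """Binary mask of pixels that changed notably between two grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def detect_motion(prev_frame, frame, threshold=25, min_pixels=50):
    """Flag a moving object when enough pixels changed between frames.
    In the full system, the flagged region would then be passed to a classifier."""
    mask = moving_object_mask(prev_frame, frame, threshold)
    return int(mask.sum()) >= min_pixels
```

In a real deployment this stage would feed the changed region into the machine-learning classifier mentioned in the abstract; the sketch only covers the motion-gating step.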
Martin, Benjamin M.; Irwin, Elise R.
We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.
Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher
Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets that challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations to extract text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning into video playback, providing instant access to the content of interest. The framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role videos play in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS videos framework and over 5 years of usage experience in several STEM courses.
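The abstract says topical segments are found by analyzing the text on video frames but does not give the segmentation rule. One simple way such a boundary detector could work is to compare the OCR'd text of consecutive frames and place a boundary where word overlap drops sharply; this sketch (the Jaccard measure and threshold are assumptions, not the ICS algorithm) illustrates the idea:

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two OCR'd frame texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def segment_boundaries(frame_texts, threshold=0.2):
    """Indices where the on-screen text changes sharply, i.e. candidate
    topic boundaries between consecutive lecture slides/frames."""
    return [i for i in range(1, len(frame_texts))
            if jaccard(frame_texts[i - 1], frame_texts[i]) < threshold]
```

Frames within one slide share most of their words, so similarity stays high; a slide change drops it near zero and produces a boundary.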
Rosvall, Kimberly A; Reichard, Dustin G; Ferguson, Stephen M; Whittaker, Danielle J; Ketterson, Ellen D
Some species of songbirds elevate testosterone in response to territorial intrusions while others do not. The search for a general explanation for this interspecific variation in hormonal response to social challenges has been impeded by methodological differences among studies. We asked whether song playback alone is sufficient to bring about elevation in testosterone or corticosterone in the dark-eyed junco (Junco hyemalis), a species that has previously demonstrated significant testosterone elevation in response to a simulated territorial intrusion when song was accompanied by a live decoy. We studied two populations of juncos that differ in length of breeding season (6-8 vs. 14-16 weeks), and conducted playbacks of high amplitude, long-range song. In one population, we also played low amplitude, short-range song, a highly potent elicitor of aggression in juncos and many songbirds. We observed strong aggressive responses to both types of song, but no detectable elevation of plasma testosterone or corticosterone in either population. We also measured rise in corticosterone in response to handling post-playback, and found full capacity to elevate corticosterone but no effect of song class (long-range or short-range) on elevation. Collectively, our data suggest that males can mount an aggressive response to playback without a change in testosterone or corticosterone, despite the ability to alter these hormones during other types of social interactions. We discuss the observed decoupling of circulating hormones and aggression in relation to mechanisms of behavior and the cues that may activate the HPA and HPG axes. Copyright © 2012 Elsevier Inc. All rights reserved.
Seffer, Dominik; Schwarting, Rainer K W; Wöhr, Markus
Rodent ultrasonic vocalizations (USV) serve as situation-dependent affective signals and convey important communicative functions. In the rat, three major USV types exist: (I) 40-kHz USV, which are emitted by pups during social isolation; (II) 22-kHz USV, which are produced by juvenile and adult rats in aversive situations, including social defeat; and (III) 50-kHz USV, which are uttered by juvenile and adult rats in appetitive situations, including rough-and-tumble play. Here, evidence for a communicative function of 50-kHz USV is reviewed, focusing on findings obtained in the recently developed 50-kHz USV radial maze playback paradigm. Up to now, the following five acoustic stimuli were tested in this paradigm: (A) natural 50-kHz USV, (B) natural 22-kHz USV, (C) artificial 50-kHz sine wave tones, (D) artificial time- and amplitude-matched white noise, and (E) background noise. All studies using the 50-kHz USV radial maze playback paradigm indicate that 50-kHz USV serve a pro-social affiliative function as social contact calls. While playback of the different kinds of acoustic stimuli used so far elicited distinct behavioral response patterns, 50-kHz USV consistently led to social approach behavior in the recipient, indicating that pro-social ultrasonic communication can be studied in a reliable and highly standardized manner by means of the 50-kHz USV radial maze playback paradigm. This appears to be particularly relevant for rodent models of neurodevelopmental disorders, as there is a tremendous need for reliable behavioral assays with face validity to social communication deficits seen in autism and schizophrenia in order to study underlying genetic and neurobiological alterations. Copyright © 2014 Elsevier B.V. All rights reserved.
Celik, Emine; Persson-Waye, Kerstin; Møller, Henrik
The study investigated possible effects of recording/playback technique and experimental method on assessments of annoyance, loudness and unpleasantness. A possible effect of exposure duration was also studied. Sounds were recorded with two different techniques: monophonic and binaural (dummy head) ... of experiments and interpretation of results. The results also show that long-term annoyance and unpleasantness are poorly predicted by short-duration methods.
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Baseline Behavior of Pilot Whales and their Responses to ... N000141210417. LONG-TERM GOALS: This project investigates the social ecology and baseline behavior of pilot whales, and their responses to anthropogenic ... and estimating a robust quantification of group cohesion. Conduct playback experiments to study responses of tagged whales to sounds of killer whales.
Dulaney, D. R.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J. M.
Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...
A microprocessor has been used to provide the major control functions in the Telemation/Sandia unattended video surveillance system. The software in the microprocessor provides control of the various hardware components and provides the capability of interactive communications with the operator. This document, in conjunction with the commented source listing, defines the philosophy and function of the software. It is assumed that the reader is familiar with the RCA 1802 COSMAC microprocessor and has a reasonable computer science background.
Xia, Xue; Qiu, Yun; Hu, Lin; Fan, Jingchao; Guo, Xiuming; Zhou, Guomin
With the rise of the 'Internet plus' concept and the rapid progress of new media technology, traditional industries have increasingly shared in the fruits of informatization and networking. Proceeding from real plant-protection demands, the construction of a cloud-based video monitoring system that surveils diseases and pests in apple orchards is discussed, aiming to solve the lack of timeliness and comprehensiveness in the contr...
Behavioral Responses of Naïve Cuvier's Beaked Whales in ... Secondary goals included conducting a killer whale playback that had not been preceded by a sonar playback (as in Tyack et al. 2011) and collecting ... information. DTAG audio was sampled at 192 kHz and other sensors at 50 Hz, allowing for a detailed reconstruction of whale behavior before, during, and
Behavioral Responses of Naïve Cuvier's Beaked Whales in ... (Ziphius cavirostris) to MFA sonar signals. Secondary goals included conducting a killer whale playback not preceded by a sonar playback (as in Tyack ... detailed reconstruction of whale behavior before, during, and after sonar transmissions. The tag is attached to the whale with suction cups using a
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs capture video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with accuracy close to one millimeter and can help preserve the resolution of brain PET images in the presence of movement.
Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan
In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for estimating the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show a substantial performance gain from using different symbol constellations across the scalable layers as compared to a fixed constellation.
A Digital Video Recorder (DVR) is a digital video recorder with hard-drive storage. When the hard-disk capacity runs out, the recorder notifies the user and, if there is no response, the oldest data are overwritten automatically and lost. The main focus of this paper is to enable recording directly connected to an editing computer. The output of both systems (DVR and Direct Recording) is compared by objective assessment using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) parameters. The results showed an average MSE of 797.8556108 for Direct Recording and 137.4346100 for the DVR, and an average PSNR of 19.5942333 dB for Direct Recording and 27.0914258 dB for the DVR. This indicates that the DVR has a much better output quality than Direct Recording.
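The MSE and PSNR figures above follow directly from the standard definitions; a minimal sketch of how they are computed for 8-bit frames (the function names are ours, but the formulas are the standard ones):

```python
import math
import numpy as np

def mse(a, b):
    """Mean squared error between two same-sized frames."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Identical frames give infinite PSNR."""
    m = mse(a, b)
    return math.inf if m == 0 else 10.0 * math.log10(peak * peak / m)
```

Note the inverse relationship the paper's numbers exhibit: the DVR's lower MSE (137.4 vs 797.9) corresponds to its higher PSNR (27.09 dB vs 19.59 dB).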
Machireddy, Archana; van Santen, Jan; Wilson, Jenny L; Myers, Julianne; Hadders-Algra, Mijna; Xubo Song
Cerebral palsy is a non-progressive neurological disorder occurring in early childhood that affects body movement and muscle control. Early identification can help improve outcomes through therapy-based interventions. Absence of so-called "fidgety movements" is a strong predictor of cerebral palsy. Currently, infant limb movements captured through either video cameras or accelerometers are analyzed to identify fidgety movements, but both modalities have limitations: video cameras lack the high temporal resolution needed to capture subtle movements, while accelerometers have low spatial resolution and capture only relative movement. To overcome these limitations, we have developed a system that combines measurements from both camera and sensors to estimate the true underlying motion using an extended Kalman filter. The estimated motion achieved 84% classification accuracy in identifying fidgety movements using a support vector machine.
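The paper uses an extended Kalman filter over its full camera/accelerometer model; the fusion principle can be illustrated with a much simpler linear 1-D Kalman filter in which the accelerometer drives the prediction step and camera positions correct it (all dimensions, noise values, and the constant-velocity model are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def kalman_fuse(positions, accels, dt=0.1, meas_var=1.0, accel_var=0.5):
    """1-D Kalman filter: accelerometer readings drive the prediction,
    camera position measurements correct it. Returns smoothed positions."""
    x = np.array([positions[0], 0.0])           # state: [position, velocity]
    P = np.eye(2)                               # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity transition
    B = np.array([0.5 * dt * dt, dt])           # how acceleration enters the state
    H = np.array([[1.0, 0.0]])                  # camera observes position only
    Q = accel_var * np.outer(B, B)              # process noise from accel noise
    R = np.array([[meas_var]])                  # camera measurement noise
    out = []
    for z, a in zip(positions, accels):
        x = F @ x + B * a                       # predict with accelerometer input
        P = F @ P @ F.T + Q
        y = z - H @ x                           # innovation from camera measurement
        S = H @ P @ H.T + R
        K = (P @ H.T) @ np.linalg.inv(S)        # Kalman gain
        x = x + (K @ y).ravel()                 # correct
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0]))
    return out
```

The extended Kalman filter in the paper generalizes this to a nonlinear 3-D limb-motion model, but the predict/correct structure is the same.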
This paper presents a parallel TBB-CUDA implementation to accelerate the single-Gaussian distribution model, which is effective for background removal in video-based fire detection systems. In this framework, TBB mainly handles initialization of the estimated Gaussian model on the CPU, while CUDA performs background removal and model adaptation on the GPU. The implementation exploits the combined computing power of TBB and CUDA and can be applied in real-time environments. Over 220 video sequences are used in the experiments. The experimental results illustrate that TBB+CUDA achieves a higher speedup than either TBB or CUDA alone. The proposed framework effectively overcomes the CPU's limited memory bandwidth and few execution units, and it reduces data-transfer and memory latency between CPU and GPU.
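For readers unfamiliar with the single-Gaussian background model being accelerated, here is a per-pixel sketch of one update step in plain NumPy (the learning rate `alpha` and threshold factor `k` are illustrative; the paper's GPU kernels implement the same per-pixel logic in CUDA):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a per-pixel single-Gaussian background model.
    A pixel is foreground if it lies more than k standard deviations
    from the running mean. Returns (foreground_mask, new_mean, new_var)."""
    frame = frame.astype(np.float64)
    dist = np.abs(frame - mean)
    fg = dist > k * np.sqrt(var)
    bg = ~fg
    # adapt the model only where the pixel looks like background
    new_mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    new_var = np.where(bg, (1 - alpha) * var + alpha * (frame - new_mean) ** 2, var)
    return fg, new_mean, new_var
```

Because every pixel is updated independently, the computation is embarrassingly parallel, which is why it maps well onto CUDA threads on the GPU.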
Sykes, P.W.; Ryman, W.E.; Kepler, C.B.; Hardy, J.W.
The configuration, components, specifications and costs of a state-of-the-art closed-circuit television system with wide application for wildlife research and management are described. The principal system components consist of a color CCTV camera with zoom lens, pan/tilt system, infrared illuminator, heavy-duty tripod, coaxial cable, coaxitron system, half-duplex equalizing video/control amplifier, time-lapse video cassette recorder, color video monitor, VHS video cassettes, portable generator, fuel tank and power cable. This system was developed and used in a study of Mississippi sandhill crane (Grus canadensis pratensis) behaviors during incubation, hatching and fledging. The main advantage of the system is minimal downtime: a complete record of every event, its time of occurrence and its duration is permanently recorded and can be replayed as many times as necessary thereafter to retrieve the data. The system is particularly applicable for studies of behavior and predation, for counting individuals, or for recording difficult-to-observe activities. The system can be run continuously for several weeks by two people, reducing personnel costs. This paper is intended to provide biologists who have little knowledge of electronics with a system that might be useful to their specific needs. The disadvantages of this system are the initial cost (about $9800 basic, 1990-1991 U.S. dollars) and the time required to play back video cassette tapes for data retrieval, though playback can be sped up when little or no activity of interest is taking place. In our study, the positive aspects of the system far outweighed the negative.
Vellekoop, S.J.L.; Abelmann, Leon; Porthun, S.; Lodder, J.C.; Miles, J.J.
Magnetic force microscopy has proven to be a suitable tool for the analysis of high-density magnetic recording materials. Comparison of the MFM image of a written signal with the actual read-back signal of the recording system can give valuable insight into the recording properties of both heads and
a complete MySQL database, C++ developer tools and the libraries utilized in the development of the system (Boost and Libcrafter), and Wireshark ... XE suite has a limit on the allowed size of each database. In order to be scalable, the project had to switch to the MySQL database suite. The ... programs that access the database use the MySQL C++ connector, provided by Oracle, and the supplied methods and libraries.
Momcilovic, Svetislav; Sousa, Leonel
In this work, scalable parallelization methods for real-time H.264/AVC video coding on multi-core platforms, such as recent Graphics Processing Units (GPUs) and the Cell Broadband Engine (Cell/BE), are proposed. By applying Amdahl's law, the most demanding parts of the video coder were identified, and Single Program Multiple Data and Single Instruction Multiple Data approaches are adopted to achieve real-time processing. In particular, video motion estimation and in-loop deblocking filtering were offloaded to execute in parallel on either GPUs or Cell/BE Synergistic Processor Elements (SPEs). The limits and advantages of these two architectures when dealing with typical video coding problems, such as data dependencies and large input data, are demonstrated. We propose techniques to minimize the impact of branch divergence and branch misprediction, data misalignment, conflicts and non-coalesced memory accesses. Moreover, data dependencies and memory size restrictions are taken into account in order to minimize synchronization and communication overheads and to achieve optimal workload balance across the available cores. A data-reuse technique is extensively applied to reduce communication overhead and achieve maximum processing speedup. Experimental results show that real-time H.264/AVC is achieved on both systems, computing 30 frames per second at a resolution of 720×576 pixels when full-pixel motion estimation is applied over 5 reference frames and a 32×32 search area. When quarter-pixel motion estimation is adopted, real-time video coding is obtained on the GPU for larger search areas and on the Cell/BE for smaller search areas.
Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C
The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.
Sharma, Shubhankar; Singh, K. John; Priya, M.
Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology and, significantly, in video consumption over the Internet, which accounts for the bulk of data traffic in general. Because video consumes so much data on the World Wide Web, many video codecs, such as HEVC/H.265 and VP9, have been developed to reduce the burden on the Internet and the bandwidth consumed by video, so that users can access video data easily. Codecs like these raise the question of which is the better technology in terms of rate distortion and coding standard. This paper addresses the difficulty of achieving low delay in video compression for video applications such as ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques with subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing a video file into several segments for compression and reassembling them, to improve the efficiency of video compression on the web as well as in offline mode.
Schneider, Jeffrey C; Ozsecen, Muzaffer Y; Muraoka, Nicholas K; Mancinelli, Chiara; Della Croce, Ugo; Ryan, Colleen M; Bonato, Paolo
Burn contractures are common and difficult to treat. Measuring continuous joint motion would inform the assessment of contracture interventions; however, it is not standard clinical practice. This study examines use of an interactive gaming system to measure continuous joint motion data. To assess the usability of an exoskeleton-based interactive gaming system in the rehabilitation of upper extremity burn contractures. Feasibility study. Eight subjects with a history of burn injury and upper extremity contractures were recruited from the outpatient clinic of a regional inpatient rehabilitation facility. Subjects used an exoskeleton-based interactive gaming system to play 4 different video games. Continuous joint motion data were collected at the shoulder and elbow during game play. Visual analog scale for engagement, difficulty and comfort. Angular range of motion by subject, joint, and game. The study population had an age of 43 ± 16 (mean ± standard deviation) years and total body surface area burned range of 10%-90%. Subjects reported satisfactory levels of enjoyment, comfort, and difficulty. Continuous joint motion data demonstrated variable characteristics by subject, plane of motion, and game. This study demonstrates the feasibility of use of an exoskeleton-based interactive gaming system in the burn population. Future studies are needed that examine the efficacy of tailoring interactive video games to the specific joint impairments of burn survivors. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Li, Hejian; An, Ping; Zhang, Zhaoyang
Three-dimensional (3-D) video brings people strong visual perspective experience, but also introduces large data and complexity processing problems. The depth estimation algorithm is especially complex and it is an obstacle for real-time system implementation. Meanwhile, high-resolution depth maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3-D glasses. This paper presents a hardware implementation of a full high-definition (HD) depth estimation system that is capable of processing full HD resolution images with a maximum processing speed of 125 fps and a disparity search range of 240 pixels. The proposed field-programmable gate array (FPGA)-based architecture implements a fusion strategy matching algorithm for efficiency design. The system performs with high efficiency and stability by using a full pipeline design, multiresolution processing, synchronizers which avoid clock domain crossing problems, efficient memory management, etc. The implementation can be included in the video systems for live 3-D television applications and can be used as an independent hardware module in low-power integrated applications.
Roger W Li
Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically, 20 adults with amblyopia (age 15-61 y; visual acuity 20/25-20/480, with no manifest ocular disease or nystagmus) were recruited and allocated into three intervention groups: action videogame group (n = 10), non-action videogame group (n = 3), and crossover control group (n = 7). Our experiments show that playing video games (both action and non-action games) for a short period of time (40-80 h, 2 h/d) using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%), positional acuity (16%), spatial attention (37%), and stereopsis (54%). Using a cross-over experimental design (first 20 h: occlusion therapy; next 40 h: videogame therapy), we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7%) and increased processing efficiency (33%). Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia.
Joongheon Kim; Eun-Seok Ryu
This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...
Rui Sergio Monteiro de Barros
The right femoral vessels of 80 rats were identified and dissected. External lengths and diameters of femoral arteries and femoral veins were measured using either a microscope or a video magnification system. Findings were correlated to animals’ weights. Mean length was 14.33 mm for both femoral arteries and femoral veins, mean diameter of arteries was 0.65 mm and diameter of veins was 0.81 mm. In our sample, rats’ body weights were only correlated with the diameter of their femoral veins.
Krüger, Andreas; Edelmann-Nusser, Jürgen
This study aims at determining the accuracy of a full body inertial measurement system in a real skiing environment in comparison with an optical video based system. Recent studies have shown the use of inertial measurement systems for the determination of kinematical parameters in alpine skiing. However, a quantitative validation of a full body inertial measurement system for the application in alpine skiing is so far not available. For the purpose of this study, a skier performed a test-run equipped with a full body inertial measurement system in combination with a DGPS. In addition, one turn of the test-run was analyzed by an optical video based system. With respect to the analyzed angles, a maximum mean difference of 4.9° was measured. No differences in the measured angles between the inertial measurement system and the combined usage with a DGPS were found. Concerning the determination of the skier's trajectory, an additional system (e.g., DGPS) must be used. As opposed to optical methods, the main advantages of the inertial measurement system are the determination of kinematical parameters without the limitation of restricted capture volume, and small time costs for the measurement preparation and data analysis.
Khalid, Md. Saifuddin; Hossan, Md. Iqbal
The integration of video conferencing systems (VCS) has increased significantly in the classrooms and administrative practices of higher education institutions. The VCSs discussed in the existing literature can be broadly categorized as desktop systems (e.g. Scopia), WebRTC or Real-Time Communications (e.g. Google Hangout, Adobe Connect, Cisco WebEx, and appear.in), and dedicated (e.g. Polycom). There is a lack of empirical study on usability evaluation of these interactive systems in educational contexts. This study identifies usability errors and measures user satisfaction with a dedicated VCS ... analysis of 12 user responses results in a below-average score. A post-study system test by the vendor identified cabling and setup errors. Applying SUMI followed by qualitative methods might enrich evaluation outcomes.
Cross-layer design has been used in streaming video over the wireless channels to optimize the overall system performance. In this paper, we extend our previous work on joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC) proposed in our previous work to the wireless scenario, and provide a detailed discussion about how to enhance the overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario, and a thorough performance evaluation is conducted to investigate its performance. Simulation results show that by cross-layer design of source rate control at application layer and congestion control at transport layer, and by taking advantage of the MAC layer information, our approach can avoid the throughput degradation caused by wireless link error, and better support the QoS requirements of the application. Thus, the playback quality is significantly improved, while good performance of the transport protocol is still preserved.
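The TFRCC mechanism above builds on equation-based congestion control in the TFRC family. For context, the standard TFRC throughput equation (the simplified form from RFC 5348, not the paper's extended QoS-aware variant) can be sketched as:

```python
import math

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly sending rate in bytes/s from the TFRC throughput equation
    (RFC 5348, simplified form).
    s: packet size (bytes); rtt: round-trip time (s); p: loss event rate;
    t_rto: retransmission timeout, commonly approximated as 4*RTT;
    b: packets acknowledged per ACK."""
    if p <= 0:
        raise ValueError("loss event rate must be positive")
    if t_rto is None:
        t_rto = 4 * rtt
    denom = rtt * math.sqrt(2 * b * p / 3) + \
            t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p * p)
    return s / denom
```

The equation makes the allowed rate fall as the loss event rate rises, which is what lets a TFRC-style sender share bandwidth fairly with TCP while changing its rate more smoothly.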
A major learning difficulty for Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. This study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants were asked to take part in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions, and to complete the Nielsen's Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with segmented captions could increase the participants' learning motivation and willingness to adopt the word segmentation system to learn Japanese.
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors propose a high-quality, small-capacity lecture-video-file creating system for distance e-learning. Examining the features of the lecturing scene, the authors employ two kinds of image-capturing equipment with complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment, and by integrating them with image processing, course materials can be produced with greatly reduced file size: the materials satisfy the requirements both for the temporal resolution needed to see the lecturer's point-indicating actions and for the high spatial resolution needed to read small written letters. A comparative experiment confirmed that an e-lecture using the proposed system was more effective than an ordinary lecture from the viewpoint of educational effect.
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P
We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected. The overall sensitivity was about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures can be used for future anatomical studies. Thus, statistical tables with canal locations can be derived, which can improve anatomical knowledge of the teeth and alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
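The k-nearest neighbor color classification above can be illustrated with a minimal sketch. This is not the authors' code: the sample colors, labels, and voting scheme below are hypothetical stand-ins, and a real system would classify every pixel of the video image rather than hand-picked tuples:

```python
def knn_classify(pixel, samples, k=3):
    """Classify an (r, g, b) pixel by majority vote among its k
    nearest labelled colour samples (squared Euclidean distance
    in RGB). `samples` is a list of ((r, g, b), label) pairs."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(pixel, ref)), label)
        for ref, label in samples
    )
    votes = {}
    for _, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical training samples: bright pixels = tooth surface,
# dark pixels = canal orifice.
training = [((250, 245, 230), "tooth"), ((240, 235, 225), "tooth"),
            ((60, 40, 35), "canal"), ((50, 35, 30), "canal"),
            ((70, 45, 40), "canal")]
label = knn_classify((55, 38, 33), training)
```

Running the classifier over each pixel yields a mask of candidate orifice regions, which the geometric tooth criterion can then restrict.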
Background Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Results Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). Conclusions The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever a subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood. PMID:21749711
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever a subject missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood.
Okano, Fumio; Kawakita, Masahiro; Arai, Jun; Sasaki, Hisayuki; Yamashita, Takayuki; Sato, Masahito; Suehiro, Koya; Haino, Yasuyuki
The integral method enables observers to see 3D images like real objects. It requires extremely high resolution for both capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines using the diagonal offset method for two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full color and full parallax 3D images in real time.
Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...
Yaser Mohammad Taheri; Alireza Zolghadr–asli; Mehran Yazdi
Video watermarking is usually considered as watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is treated as a single watermarked image, so the collusion attack is more critical in video watermarking. If the same or a redundant watermark is used for embedding in every frame of a video, the watermark can be estimated and then removed by the watermark estimate remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks c...
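The collusion weakness described above can be illustrated with a minimal sketch: if the same additive watermark is embedded in every frame, temporally averaging many frames estimates it, because frame content that varies averages out while the static watermark persists. This is a simplified model, not the exact WER formulation; the signal size and zero-mean noise model for frame content are assumptions:

```python
import random

def estimate_watermark(frames):
    """Collusion estimate of a watermark that is embedded
    identically in every frame: the element-wise average of the
    frames. If frame contents are roughly zero-mean and
    uncorrelated, the average converges to the watermark."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
watermark = [1.0, -1.0, 1.0, 1.0, -1.0]          # hypothetical pattern
frames = [[w + random.gauss(0, 1) for w in watermark]
          for _ in range(5000)]                   # content as noise
estimate = estimate_watermark(frames)
# `estimate` approaches `watermark`; an attacker can then subtract it.
```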
Arai, Jun; Okui, Makoto; Yamashita, Takayuki; Okano, Fumio
We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning-line video system and can shoot and display 3-D color moving images in real time. We had previously developed an integral 3-D television that used a high-definition television system. The new system uses about 6 times as many elemental images [160 (horizontal) × 118 (vertical) elemental images] arranged at about 1.5 times the density to further improve the picture quality of the reconstructed image. Through comparison, an image near the lens array can be reconstructed at about 1.9 times the spatial frequency, and the viewing angle is about 1.5 times as wide.
Takahata, Minoru; Uemori, Akira; Nakano, Hirotaka
This video-on-demand service is constructed of distributed servers, including video servers that supply real-time MPEG-1 video and audio, real-time MPEG-1 encoders, and an application server that supplies additional text information and agents for retrieval. This system has three distinctive features that enable it to provide multi-viewpoint access to real-time visual information: (1) The terminal application uses an agent-oriented approach that allows the system to be easily extended. The agents are implemented using a commercial authoring tool plus additional objects that communicate with the video servers using TCP/IP protocols. (2) The application server manages the agents, automatically processes text information, and is able to handle unexpected alterations of the contents. (3) The distributed system has an economical, flexible architecture for storing long video streams. The real-time MPEG-1 encoder system is based on multi-channel phase-shifting processing. We also describe a practical application of this system, a prototype TV-on-demand service called TVOD, which provides access to broadcast television programs from the previous week.
Garcia, Maxime; Wyman, Megan T.; Charlton, Benjamin D.; Tecumseh Fitch, W.; Reby, David
Red deer stags (Cervus elaphus) give two distinct types of roars during the breeding season, the "common roar" and the "harsh roar." Harsh roars are more frequent during contexts of intense competition, and characterized by a set of features that increase their perceptual salience, suggesting that they signal heightened arousal. While common roars have been shown to encode size information and mediate both male competition and female choice, to our knowledge, the specific function of harsh roars during male competition has not yet been studied. Here, we investigate the hypothesis that the specific structure of male harsh roars signals high arousal to competitors. We contrast the behavioral responses of free ranging, harem-holding stags to the playback of harsh roars from an unfamiliar competitor with their response to the playback of common roars from the same animal. We show that males react less strongly to sequences of harsh roars than to sequences of common roars, possibly because they are reluctant to escalate conflicts with highly motivated and threatening unfamiliar males in the absence of visual information. While future work should investigate the response of stags to harsh roars from familiar opponents, our observations remain consistent with the hypothesis that harsh roars may signal motivation during male competition, and illustrate how intrasexual selection can contribute to the diversification of male vocal signals.
The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during the characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.
Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject's movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration was increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred and seventy-six elderly people performed the WF task. The deoxy-Hb concentration was decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful for collecting NIRS data by recording the waveforms and the subject's appearance simultaneously.
With the rapid development of wireless networks and image acquisition technology, wireless video transmission technology has been widely applied in various communication systems. Traditional video monitoring technology is restricted by conditions such as layout, environment, relatively large volume, and cost. In view of this problem, this paper proposes equipping a mobile car with a wireless video monitoring system. The mobile car, which provides functions such as detection, video acquisition, and wireless data transmission, is developed based on an STC89C52 Micro Control Unit (MCU) and a WiFi router. First, information such as image, temperature, and humidity is processed by the MCU and communicated to the router, and then returned by the WiFi router to the host phone. Second, control information issued by the host phone is received by the WiFi router and sent to the MCU, and the MCU then issues the relevant instructions. Finally, wireless transmission of video images and remote control of the car are realized. The results show that the system has features such as simple operation, high stability, fast response, low cost, strong flexibility, and wide applicability. The system has practical and popularization value.
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
Two high-speed video cameras are successfully used to detect the motion of a flying shuttlecock of badminton. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three dimensional position and velocity of a flying shuttlecock, and predicts the position where the shuttlecock falls to the ground. The badminton robot moves quickly to the position where the shuttlecock falls, and hits the shuttlecock back into the opponent's side of the court. In a game of badminton there is a large audience, and some spectators move behind a flying shuttlecock; this is a kind of background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
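Predicting where the shuttlecock falls, once its 3D position and velocity are measured, can be illustrated with the simplest ballistic sketch below, which ignores aerodynamic drag. A real shuttlecock is strongly drag-dominated, so the actual system would need a drag-aware model; the function name and numbers are assumptions:

```python
import math

def predict_landing(p, v, g=9.81):
    """Predict where a point mass at position p = (x, y, z), moving
    with velocity v = (vx, vy, vz), crosses the ground plane z = 0.
    Solves z + vz*t - g*t^2/2 = 0 for the positive root, then
    extrapolates the horizontal motion over that time."""
    x, y, z = p
    vx, vy, vz = v
    t = (vz + math.sqrt(vz * vz + 2 * g * z)) / g  # time to z = 0
    return x + vx * t, y + vy * t

# Shuttlecock at 3 m height, moving horizontally at 2 m/s along x.
lx, ly = predict_landing((0.0, 0.0, 3.0), (2.0, 0.0, 0.0))
```

In the real system this prediction would be refreshed every frame as the stereo cameras update the measured position and velocity.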
Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K
In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise-resilient image features effectively representing the textural and echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND for providing a second, more objective opinion to radiologists by exploiting image evidence.
The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing airplanes), for the commercial surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing, containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
As people's quality of life has improved significantly, traditional 2D video technology cannot meet the demand for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video playback platform. The platform consists of a server and clients. The server is used for transmission of different formats of video, and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The player for Android, which has all the basic functions of ordinary players and is able to play normal 2D video, is the base structure for redevelopment. RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, we perform a pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats we process are left-and-right, top-and-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. By employing these key technologies, the design has been completed. After updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet people's requirements.
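The pixel rearrangement for the left-and-right format mentioned above amounts to splitting each decoded frame into two half-width views. A minimal sketch follows; the real player does this on decoded pixel buffers in native code, so the list-of-rows representation here is only illustrative:

```python
def split_left_right(frame):
    """Split a side-by-side ('left and right') stereo frame into
    left-eye and right-eye images. `frame` is a list of rows, each
    row a list of pixels; the left half of every row belongs to the
    left-eye view and the right half to the right-eye view."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# A 2x4 toy frame: 'L' pixels on the left half, 'R' on the right.
frame = [["L", "L", "R", "R"],
         ["L", "L", "R", "R"]]
left, right = split_left_right(frame)
```

The top-and-bottom format would split rows instead of columns, and the nine-grid format would tile the frame into a 3x3 arrangement of sub-views.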
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates a mean re-projection accuracy of (0.7 ± 0.3) pixels and a mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
Tsifouti, Anastasia; Nasralla, Moustafa M.; Razaak, Manzoor; Cope, James; Orwell, James M.; Martini, Maria G.; Sage, Kingsley
The Image Library for Intelligent Detection Systems (i-LIDS) provides benchmark surveillance datasets for analytics systems. This paper proposes a methodology to investigate the effect of compression and frame-rate reduction, and to recommend an appropriate suite of degraded datasets for public release. The library consists of six scenarios, including Sterile Zone (SZ) and Parked Vehicle (PV), which are investigated using two different compression algorithms (H.264 and JPEG) and a number of detection systems. PV has higher spatio-temporal complexity than SZ. Compression performance is dependent on scene content, hence PV will require larger bit-streams than SZ for any given distortion rate. The study includes both industry standard algorithms (for transmission) and CCTV recorders (for storage). CCTV recorders generally use proprietary formats, which may significantly affect the visual information. Encoding standards such as H.264 and JPEG use the Discrete Cosine Transform (DCT) technique, which introduces blocking artefacts. The H.264 compression algorithm follows a hybrid predictive coding approach to achieve high compression gains, exploiting both spatial and temporal redundancy. The highly predictive approach of H.264 may introduce more artefacts, resulting in a greater effect on the performance of analytics systems than JPEG. The paper describes the two main components of the proposed methodology to measure the effect of degradation on analytics performance. First, standard tests use the 'f-measure' to evaluate performance on a range of degraded video sets. Second, the datasets are characterised by quantifying scene features defined using image processing techniques. This characterisation permits an analysis of the points of failure introduced by the video degradation.
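The 'f-measure' used in the standard tests combines a detector's precision and recall into one score. A minimal sketch of the computation follows; the counts are hypothetical, and the i-LIDS evaluation defines its own event-matching rules before such counts are formed:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from true positives, false positives, and false
    negatives. beta weights recall against precision (beta = 1
    gives the balanced F1 score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A detector on a degraded clip: 80 correct alarms, 20 false alarms,
# 10 missed events -> precision 0.8, recall 8/9.
score = f_measure(tp=80, fp=20, fn=10)
```

Plotting this score against compression level or frame rate, per scenario, is what reveals the points of failure the paper discusses.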
M. M. Blagoveshchenskaya
The most important operation in the production of granular mixed fodder is the molding process. The properties of the granular mixed fodder are defined during this process; they determine the production process and final product quality. The article analyzes the possibility of using a digital video camera as an intelligent sensor in the production control system. A parametric model of the process of molding bundles from granular fodder mass is presented. Dynamic characteristics of the molding process were determined. A mathematical model of the motion of a bundle of granular fodder mass after the matrix holes was developed. A mathematical model of the automatic control system (ACS), using a reference video frame as the set point, was developed in the MATLAB software environment. As a parameter of the bundle molding process it is proposed to use the value of the specific area determined in the mathematical processing of the video frame. Algorithms were developed to determine changes in the structural and mechanical properties of the feed mass from video frame images. Digital video of various modes of the molding machine was recorded, and after mathematical processing of the video the transfer functions were determined for using the change of specific area as the adjustable parameter. Structural and functional diagrams of the system regulating the feed bundle molding process with the use of digital video cameras were built and analyzed. Based on the solution of the equations of fluid dynamics, a mathematical model of bundle motion after leaving the matrix hole was obtained. In addition to viscosity, the creep property characteristic of the feed mass was considered. The mathematical model of the ACS for the bundle molding process, which allows investigation of the transient processes occurring in a control system that uses a digital video camera as the smart sensor, was developed in Simulink.
Brown, Michael A.
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and supporting back rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are easily readied to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage for the many operators and products. The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration / sustained configuration, integration with video adjustment packages, collaborative tools, host / recipient controllability, and the utmost paramount priority, an enterprise solution that provides ownership to the whole
Pasch, H. L.
An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.
Donnelly, Mark P; Nugent, Chris D; Craig, David; Passmore, Peter; Mulvenna, Maurice
The current paper presents details regarding the early development of a memory-prompt solution for persons with early dementia. Using everyday technology, in the form of a cell phone, video reminders are delivered to assist with daily activities. The proposed CPVS system will permit carers to record and schedule video reminders remotely using a standard personal computer and webcam. The aim of the three-year project is that, through the frequent delivery of helpful video reminders, a 'virtual carer' will be present with the person with dementia at all times. The first prototype of the system has been fully implemented, with the first field trial scheduled to take place in May 2008. Initially, only three patient-carer dyads will be involved; however, the second field trial aims to involve 30 dyads in the study. Details of the first prototype and the methods of evaluation are presented herein.
Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, with only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.
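Step (iii) above can be sketched as a comparison of estimated depths for overlapping image regions: when two regions overlap, the deeper object is flagged as occluded. The function names, the box representation, and the larger-is-farther depth convention below are illustrative assumptions, not the paper's implementation:

```python
def boxes_overlap(a, b):
    # a, b: bounding boxes (x1, y1, x2, y2)
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def find_occlusions(objects):
    """objects: list of dicts with 'id', 'box', 'depth' (larger = farther).
    Returns (occluded_id, occluder_id) pairs: of two overlapping objects,
    the deeper one is assumed to be occluded by the nearer one."""
    pairs = []
    for i, a in enumerate(objects):
        for b in objects[i + 1:]:
            if boxes_overlap(a["box"], b["box"]):
                far, near = (a, b) if a["depth"] > b["depth"] else (b, a)
                pairs.append((far["id"], near["id"]))
    return pairs
```

For example, an object at depth 5 overlapped by one at depth 2 is reported as occluded by it, while disjoint boxes are never paired.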
Full Text Available In this paper the results of strength analysis are presented for two types of welded joints in high-strength steel S960QC, made using conventional and laser welding technologies. The hardness distributions, tensile properties, and fracture toughness were determined for the weld material and heat-affected-zone (HAZ) material of both types of welded joints. Test results showed the advantage of the laser-welded joints over the conventional ones: tensile properties and fracture toughness in all areas of the laser joints are at a higher level than in the conventional joint. The heat-affected zone of the conventional welded joint is a weak area, where the tensile properties are lower than in the base material. Verification of the tensile tests, carried out using the Aramis video system, confirmed this assumption. The highest level of strains was observed in the HAZ material, and the destruction process also occurred in the HAZ of the conventional welded joint.
Beatty, Ian D
In order to facilitate analyzing video games as learning systems and instructional designs as games, we present a theoretical framework that integrates ideas from a broad range of literature. The framework describes games in terms of four layers, all sharing similar structural elements and dynamics: a micro-level game focused on immediate problem-solving and skill development, a macro-level game focused on the experience of the game world and story and identity development, and two meta-level games focused on building or modifying the game and on social interactions around it. Each layer casts gameplay as a co-construction of the game and the player, and contains three dynamical feedback loops: an exploratory learning loop, an intrinsic motivation loop, and an identity loop.
Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL
A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.
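The thermal imaging described above rests on relating channel intensity ratios to temperature. A minimal sketch of two-color ratio pyrometry under the Wien approximation is below; the effective filter wavelengths, the use of Wien's law, and the function names are illustrative assumptions, not the patent's calibration:

```python
import math

C2 = 1.4388e-2                   # second radiation constant, m*K
LAM_R, LAM_G = 620e-9, 540e-9    # assumed effective R and G filter wavelengths, m

def wien_intensity(lam, temp_k):
    # Wien approximation to Planck's law (unnormalized spectral radiance)
    return lam ** -5 * math.exp(-C2 / (lam * temp_k))

def temperature_from_ratio(r_over_g):
    """Invert the R/G intensity ratio to a temperature estimate in kelvin.
    Derived from wien_intensity:
        R/G = (LAM_G/LAM_R)**5 * exp((C2/T) * (1/LAM_G - 1/LAM_R))."""
    num = C2 * (1.0 / LAM_G - 1.0 / LAM_R)
    den = math.log(r_over_g * (LAM_R / LAM_G) ** 5)
    return num / den
```

A round trip (computing the ratio at a known temperature and inverting it) recovers that temperature, which is a useful sanity check for any chosen wavelength pair.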
Full Text Available An approach has been proposed for automatic adaptive subtitle coloring using a fuzzy logic-based algorithm. This system changes the color of the video subtitle/caption to a "pleasant" color according to color harmony and the visual perception of the image background colors. In the fuzzy analyzer unit, using RGB histograms of the background image, the R, G, and B values for the color of the subtitle/caption are computed using fixed fuzzy IF-THEN rules fully derived from color harmony theories to satisfy complementary-color and subtitle-background color harmony conditions. A real-time hardware structure has been proposed for implementation of the front-end processing unit as well as the fuzzy analyzer unit.
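As a crisp (non-fuzzy) simplification of the idea, the complementary-color condition can be sketched by summarizing the background region and inverting its color. The paper's fixed fuzzy IF-THEN rule base is replaced here by this single rule, and all names are assumptions:

```python
def dominant_color(pixels):
    """Mean RGB of the background region behind the subtitle.
    pixels: non-empty list of (r, g, b) tuples with 0-255 channels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def complementary(rgb):
    # RGB complement: the simplest stand-in for the harmony rules
    return tuple(255 - c for c in rgb)

def subtitle_color(background_pixels):
    return complementary(dominant_color(background_pixels))
```

On a predominantly blue background this yields a yellowish subtitle color, which matches the intuition behind the complementary-color condition.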
Cho, Jai Wan; Lee, Nam Ho; Choi, Young Soo
There are 760 feederpipes connected to the inlets/outlets of the 380 pressure tube channels on the front of the calandria in the CANDU-type reactor of the Wolsung Nuclear Power Plant. To meet ISI (In-Service Inspection) and PSI (Post-Service Inspection) requirements, maintenance activities of measuring the thickness of the curvilinear part of the feederpipe and inspecting the feederpipe support area within the calandria are needed to ensure continued reliable operation of the nuclear power plant. An ultrasonic probe is used to measure the thickness of the curvilinear part of the feederpipe; however, workers are exposed to radiation during the measurement period. It is also impossible to inspect the feederpipe support area thoroughly because of its narrow and confined accessibility: the inspection space between the pressure tube channels is less than 100 mm, and the pipes in the feederpipe support area are congested. Moreover, workers involved in inspecting the feederpipe support area are in jeopardy of high-level radiation exposure. Concerns about the sliding home, which makes the movement of the feederpipe connected to the pressure tube channel smooth as the pressure tube expands and contracts in its axial direction, becoming stuck to the feederpipe support and some of the structural components have made necessary the development of a video inspection probe system with narrow and confined accessibility to observe and inspect the feederpipe support area more closely. Using the video inspection probe system, it is possible to inspect and repair abnormalities of the feederpipe supports connected to the pressure tube channels of the calandria more accurately and quantitatively than with the naked eye. This will do much to ensure the safety of the CANDU-type nuclear power plant.
Hwang, Euiseok; Yoon, Pilsang; Kim, Nakyoung; Kang, Byongbok; Kim, Kunyul; Park, Jooyoun; Park, Jongyong
A holographic data storage prototype fully integrated with electronics for video demonstration has been developed. It can record data in several tracks of a photopolymer disk and access them arbitrarily during the retrieval process from the continuously rotating disk. An embedded controller operates all of the optomechanical components of the prototype automatically, and electronic parts conduct adaptive readout of channel data at up to 55 megabits per second. For real-time video demonstration, video streams are recorded in four concentric circular tracks of the disk. Each recording spot contains about one hundred pages with angle multiplexing. The eleven-minute video data are successfully reconstructed from the prototype.
Kastelein, R.A.; Helder-Hoek, L.; Gransier, R.; Terhune, J.M.; Jennings, N.; Jong, C.A.F. de
Acoustic mitigation devices (AMDs) are used to deter marine mammals from construction sites to prevent hearing injury by offshore pile-driving noise. In order to quantify the distance at which AMDs designed as ‘seal scarers’ are detected by seals, the 50% hearing thresholds for playbacks of their
Celik, Emine; Persson Waye, Kerstin; Møller, Henrik
The study presented here is part of a project with an overall aim to evaluate how different sound properties relate to annoyance. In order to achieve this it is necessary to study methodological aspects of importance for the experimentally evaluated annoyance. In previous studies of perception...... in perception related to annoyance, loudness and unpleasantness between monophonic recordings played back through a loudspeaker and binaural recordings played back via headphones and to evaluate whether a possible difference depends on temporal and frequency characteristics as well as spatial characteristics...... a loudspeaker and the binaural recordings were presented through both closed (circum-aural) and completely open (free of the ear) headphones. The results show that for all judgments (annoyance, loudness and unpleasantness), there was no significant main effect of recording and playback techniques; however...
Full Text Available The song of oscines provides an extensively studied model of age-dependent behaviour changes. Male and female receivers might use song characteristics to obtain information about the age of a signaller, which is often related to its quality. Whereas most of the age-dependent song changes have been studied in solo singing, the role of age in vocal interactions is less well understood. We addressed this issue in a playback study with common nightingales (Luscinia megarhynchos). Previous studies showed that male nightingales had smaller repertoires in their first year than older males and that males adjusted their repertoire towards the most common songs in the breeding population. We now compared vocal interaction patterns in a playback study of 12 one-year-old and 12 older nightingales (cross-sectional approach). Five of these males were tested both in their first and second breeding seasons (longitudinal approach). Song duration and latency to respond did not differ between males of different ages in either approach. In the cross-sectional approach, one-year-old nightingales matched song types twice as often as did older birds. Similarly, in the longitudinal approach all except one bird reduced the number of song-type matches in their second season. Individuals tended to overlap songs at higher rates in their second breeding season than in their first. The higher levels of song-type matches in the first year and song overlapping by birds in their second year suggest that these are communicative strategies to establish relationships with competing males and/or choosy females.
Wang, C. P.; Bow, R. T.
A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.
Full Text Available The expansion of digital television and the convergence between conventional broadcasting and television over IP contributed to the gradual increase in the number of available channels and on-demand video content. Moreover, the dissemination of the use of mobile devices like laptops, smartphones, and tablets in everyday activities resulted in a shift of the traditional television viewing paradigm from the couch to everywhere, anytime, from any device. Although this new scenario enables a great improvement in viewing experiences, it also brings new challenges given the overload of information that the viewer faces. Recommendation systems stand out as a possible solution to help a watcher select the content that best fits his/her preferences. This paper describes a web-based system that helps the user navigate broadcasted and online television content by implementing recommendations based on collaborative and content-based filtering. The algorithms developed estimate the similarity between items and users and predict the rating that a user would assign to a particular item (television program, movie, etc.). To enable interoperability between different systems, programs' characteristics (title, genre, actors, etc.) are stored according to the TV-Anytime standard. The set of recommendations produced is presented through a web application that allows the user to interact with the system based on the obtained recommendations.
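The rating-prediction step described above can be sketched with item-based collaborative filtering using cosine similarity, one common formulation; the paper does not specify this exact similarity measure, and the data layout below is an assumption:

```python
import math

def cosine(a, b):
    # similarity of two items over the users that rated both
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[u] * b[u] for u in common)
    na = math.sqrt(sum(a[u] ** 2 for u in common))
    nb = math.sqrt(sum(b[u] ** 2 for u in common))
    return dot / (na * nb)

def predict(user, item, ratings):
    """ratings: {item: {user: rating}}. Predict the user's rating for item
    as a similarity-weighted average of that user's other ratings."""
    num = den = 0.0
    for other, votes in ratings.items():
        if other == item or user not in votes:
            continue
        s = cosine(ratings[item], votes)
        num += s * votes[user]
        den += abs(s)
    return num / den if den else 0.0
```

The prediction degrades gracefully: with no co-rated items the similarity is zero and the fallback prediction is 0.0, a slot where a real system would substitute an item or user mean.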
Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis
Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increase the achievable size and weight of such systems to beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The side bands are subsequently stripped from the optical carrier and recombined to provide a real time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results will be presented herein.
Desurmont, Xavier; Wijnhoven, Rob; Jaspers, Egbert; Caignart, Olivier; Barais, Mike; Favoreel, Wouter; Delaigle, Jean-Francois
The CANDELA project aims at realizing a system for real-time image processing in traffic and surveillance applications. The system performs segmentation, labels the extracted blobs and tracks their movements in the scene. Performance evaluation of such a system is a major challenge since no standard methods exist and the criteria for evaluation are highly subjective. This paper proposes a performance evaluation approach for video content analysis (VCA) systems and identifies the involved research areas. For these areas we give an overview of the state-of-the-art in performance evaluation and introduce a classification into different semantic levels. The proposed evaluation approach compares the results of the VCA algorithm with a ground-truth (GT) counterpart, which contains the desired results. Both the VCA results and the ground truth comprise description files that are formatted in MPEG-7. The evaluation is required to provide an objective performance measure and a means to choose between competitive methods. In addition, it enables algorithm developers to measure the progress of their work at the different levels in the design process. From these requirements and the state-of-the-art overview we conclude that standardization is highly desirable, for which many research topics still need to be addressed.
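At the lowest semantic level, comparing VCA output with its ground-truth counterpart often reduces to matching detected regions against GT regions by overlap. A minimal sketch using intersection-over-union matching is below; the 0.5 threshold and the function names are assumptions, not the CANDELA metrics:

```python
def iou(a, b):
    # intersection-over-union of two boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, thr=0.5):
    """Greedy one-to-one matching of detections to GT boxes by IoU."""
    matched = set()
    tp = 0
    for d in detections:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(d, g) >= thr:
                matched.add(i)
                tp += 1
                break
    prec = tp / len(detections) if detections else 0.0
    rec = tp / len(ground_truth) if ground_truth else 0.0
    return prec, rec
```

Higher semantic levels (tracks, events) need richer matching, but this frame-level measure is the usual foundation.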
Javier I. Portillo
Full Text Available Automatic surveillance of the airport surface is one of the core components of advanced surface movement, guidance, and control systems (A-SMGCS). This function is in charge of the automatic detection, identification, and tracking of all interesting targets (aircraft and relevant ground vehicles) in the airport movement area. This paper presents a novel approach for object tracking based on sequences of video images. A fuzzy system has been developed to weigh update decisions both for the trajectories and for the shapes estimated for targets from the image regions extracted in the images. The advantages of this approach are robustness, flexibility in the design to adapt to different situations, and efficiency for operation in real time, avoiding combinatorial enumeration. Results obtained in representative ground operations show the system's capability to solve complex scenarios and improve tracking accuracy. Finally, an automatic procedure based on neuro-fuzzy techniques has been applied in order to obtain a set of rules from representative examples. Validation of the learned system shows its capability to learn the suitable tracker decisions.
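The fuzzy weighting of update decisions can be sketched as a confidence score that blends the predicted track state with the new measurement. The membership functions, inputs, and rule below are toy assumptions for illustration, not the paper's rule base:

```python
def fuzzy_confidence(overlap, density):
    """Toy rule: high region/track overlap AND low clutter density
    => high update confidence. Inputs are assumed in [0, 1]."""
    high_overlap = max(0.0, min(1.0, (overlap - 0.3) / 0.4))
    low_density = max(0.0, min(1.0, (0.8 - density) / 0.6))
    return min(high_overlap, low_density)   # fuzzy AND via minimum

def update_estimate(prev, meas, conf):
    # blend the predicted state with the measurement by confidence
    return tuple(p + conf * (m - p) for p, m in zip(prev, meas))
```

With full confidence the track snaps to the measurement; with zero confidence (e.g. heavy clutter) the update is skipped and the prediction stands, which is the behavior the fuzzy weighting is meant to produce.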
Ballesta, S; Reymond, G; Pozzobon, M; Duhamel, J-R
To date, assessing the solitary and social behaviors of laboratory primate colonies has relied on time-consuming manual scoring methods. Here, we describe a real-time multi-camera 3D tracking system developed to measure the behavior of socially housed primates. Their positions are identified using non-invasive color markers such as plastic collars, which also makes it possible to track colored objects and measure their usage. Compared to traditional manual ethological scoring, we show that this system can reliably evaluate solitary behaviors (foraging, solitary resting, toy usage, locomotion) as well as spatial proximity with peers, which is considered a good proxy of social motivation. Compared to existing video-based commercial systems currently available to measure animal activity, this system offers many possibilities (real-time data, large volume coverage, multiple-animal tracking) at a lower hardware cost. Quantitative behavioral data of animal groups can now be obtained automatically over very long periods of time, thus opening new perspectives, in particular for studying the neuroethology of social behavior in primates.
Full Text Available Object detection with small computation cost and processing time is a necessity in diverse domains such as traffic analysis, security cameras, video surveillance, etc. With current advances in technology and decreases in the prices of image sensors and video cameras, the resolution of captured images is more than 1 MP, with higher frame rates. This implies a considerable data size that needs to be processed in a very short period of time when real-time operations and data processing are needed. Real-time video processing with high performance can be achieved with GPU technology. The aim of this study is to evaluate the influence of different image and video resolutions on the processing time, the number of object detections, and the accuracy of the detected objects. The MOG2 algorithm is used for processing video input data with the GPU module. A fuzzy inference system is used to evaluate the accuracy of the number of detected objects and to show the difference between CPU and GPU computing methods.
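MOG2 maintains a per-pixel mixture of Gaussians as the background model. The pure-Python sketch below substitutes a single running average per pixel to show the same update-and-threshold idea; the parameter values, sparse frame representation, and names are illustrative assumptions, not the OpenCV implementation:

```python
def make_bg_subtractor(alpha=0.05, thresh=30):
    """Running-average background model. MOG2 replaces the single
    per-pixel mean with a mixture of Gaussians, but the idea is the
    same: flag pixels far from the model, then adapt the model."""
    bg = {}
    def apply(frame):
        # frame: dict {(x, y): gray_value}; returns the set of
        # pixel coordinates classified as foreground
        fg = set()
        for p, v in frame.items():
            m = bg.get(p, v)          # unseen pixels seed the model
            if abs(v - m) > thresh:
                fg.add(p)
            bg[p] = (1 - alpha) * m + alpha * v
        return fg
    return apply
```

The first frame seeds the model (no foreground); a pixel that later jumps by more than the threshold is reported as foreground while the model slowly adapts toward it.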
... From the Federal Register Online via the Government Publishing Office ] SECURITIES AND EXCHANGE COMMISSION In the Matter of Digital Video Systems, Inc., Geocom Resources, Inc., and GoldMountain Exploration... of Suspension of Trading It appears to the Securities and Exchange Commission that there is a lack of...
McNeal, Thomas, Jr.; Kearns, Landon
Video streaming can be a very useful tool for educators. It is now possible for a school's technical specialist or classroom teacher to create a streaming server with tools that are available in many classrooms. In this article we describe how we created our video streamer using free software, older computers, and borrowed hardware. The system…
I M.O. Widyantara
Full Text Available A video surveillance system (VSS) is a monitoring system based on IP cameras. A VSS is implemented for live streaming and serves to observe and monitor a site remotely. Typically, the IP cameras in a VSS come with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software becomes ineffective. With IP camera installations spread over a large area, an administrator would find it difficult to describe the location of each IP camera. In addition, monitoring an area of IP cameras also becomes more difficult. Addressing these flaws in VSS, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates the concept of a web-based geographical information system with the Google Maps API (Web-GIS). The VSS application is built with smart features including IP camera maps, live streaming of events, information in the info window, and marker clustering. Test results showed that the application is able to display all the built features well.
Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur
Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss rate and assist doctors in finding the most important regions to pay attention to. In this paper, an automatic system has been proposed as a support to gastrointestinal polyp detection. This system captures the video streams from endoscopic video and, in the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined, and the combined features are used to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms the state-of-the-art methods, gaining accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
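The fuse-then-classify pipeline above can be sketched as feature concatenation followed by a linear decision rule. A simple perceptron stands in here for the linear SVM (both learn a separating hyperplane, but the SVM maximizes the margin); the tiny feature vectors and names are assumptions for illustration:

```python
def fuse(cw_feats, cnn_feats):
    # concatenate the color-wavelet and CNN feature vectors per frame
    return cw_feats + cnn_feats

def train_linear(samples, labels, epochs=20, lr=0.1):
    """Perceptron stand-in for the paper's linear SVM (no margin
    maximization). labels are in {-1, +1}."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    # +1 = polyp, -1 = non-polyp in this toy setup
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

On linearly separable data the perceptron converges to a correct separator, which is enough to illustrate how the fused feature vector feeds a linear classifier.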
Partha Sindu I Gede
Full Text Available The purpose of this study was to determine the effect of the use of instructional media based on a lecture video and slide synchronization system on the Statistics learning achievement of students in the PTI department. The benefit of this research is to help lecturers in the instructional process to improve students' learning achievements, leading to better learning outcomes. Students can use instructional media created with the lecture video and slide synchronization system to support more interactive self-learning activities. Students can conduct learning activities more efficiently and conducively because the synchronized lecture video and slides assist them in the learning process. The population of this research was all students of semester VI (six) majoring in Informatics Engineering Education. The sample was the students of classes VI B and VI D of the academic year 2016/2017. The type of research used in this study was a quasi-experiment, with a posttest-only nonequivalent control group design. The results concluded that there was a significant effect of applying learning media based on the lecture video and slide synchronization system on the Statistics learning results in the PTI department.
Kämmerer, P W; Schneider, D; Pacyna, A A; Daubländer, M
The aim of the present study was an evaluation of movement during double aspiration by different manual syringes and one computer-controlled local anesthesia delivery system (C-CLAD). With five different devices (two disposable syringes (2 ml, 5 ml), two aspirating syringes (active, passive), and one C-CLAD), a simulation of double aspiration in a phantom model was conducted. Two experienced and two inexperienced test persons carried out double aspiration with the injection systems at the right and left phantom mandibles at three different inclination angles (n = 24 × 5 × 2 for each system). 3D divergences of the needle between aspiration procedures (mm) were measured with two video cameras. An average movement of 2.85 mm (SD 1.63) was seen for the 2-ml disposable syringe, 2.36 mm (SD 0.86) for the 5-ml syringe, 2.45 mm (SD 0.9) for the active-aspirating syringe, 2.01 mm (SD 0.7) for the passive-aspirating syringe, and 0.91 mm (SD 0.63) for the C-CLAD. The movement was significantly less for the C-CLAD than for the other systems, and the movement of the needle in the soft tissue was likewise significantly less for the C-CLAD; a clear difference in movement of the syringe could be seen between the manual and C-CLAD systems. Launching the aspiration by a foot pedal in computer-assisted anesthesia leads to less movement. To solve the problem of movement during aspiration, with its possibly increased false-negative results, a C-CLAD seems to be favorable.
Bornoe, Nis; Barkhuus, Louise
Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.
Crump, John M; Deutsch, Thomas
Laryngeal examinations, especially stroboscopic examinations, are increasingly recorded using digital video formats on computer media, rather than using analog formats on videotape. It would be useful to share these examinations with other medical professionals in formats that would facilitate reliable and high-quality playback on a personal computer by the recipients. Unfortunately, a personal computer is not well designed for reliable presentation of artifact-free video. It is particularly important that laryngeal video play without artifacts of motion or color because these are often the characteristics of greatest clinical interest. With proper tools and procedures, and with reasonable compromises in image resolution and the duration of the examination, digital video of laryngeal examinations can be reliably exchanged. However, the tools, procedures, and formats for recording, converting to another digital format ("transcoding"), communicating, copying, and playing digital video with a personal computer are not familiar to most medical professionals. Some understanding of digital video and the tools available is required of those wanting to exchange digital video. Best results are achieved by recording to a digital format best suited for recording (such as MJPEG or DV), judiciously selecting a segment of the recording for sharing, and converting to a format suited to distribution (such as MPEG1 or MPEG2) using a medium suited to the situation (such as e-mail attachment, CD-ROM, a "clip" within a Microsoft PowerPoint presentation, or DVD-Video). If digital video is sent to a colleague, some guidance on playing files and using a PC media player is helpful.
A. A. SHAFIE
Full Text Available Traffic signal lights can be optimized using vehicle flow statistics obtained by Smart Video Surveillance Software (SVSS). This research focuses on an efficient traffic control system that detects and counts vehicle numbers at various times and locations. At present, one of the biggest problems in the main cities of any country is the traffic jam during office hours and office break hours. Sometimes it can be seen that the traffic signal green light is still on even though no vehicle is coming. Similarly, long queues of vehicles wait even though the road is empty, because the traffic signal light timing was selected without proper investigation of vehicle flow. This can be handled by adjusting the vehicle passing time, implemented by our developed SVSS. A number of experimental results on vehicle flows are discussed graphically in this research in order to test the feasibility of the developed system. Finally, an adaptive background model is proposed in SVSS in order to successfully detect target objects such as motorbikes, cars, buses, etc.
The scope of this paper is a video surveillance system consisting of three principal modules: a segmentation module, vehicle classification, and vehicle counting. The segmentation is based on background subtraction using the Codebook method. This step defines the regions of interest associated with vehicles. To classify vehicles by type, our system uses histograms of oriented gradients followed by a support vector machine. Counting and tracking vehicles is the last task to be performed. The presence of partial occlusion decreases the accuracy of vehicle segmentation and classification, which directly impacts the robustness of a video surveillance system. Therefore, a novel method for handling partial occlusions based on the vehicle classification process has been developed. The results achieved show that the accuracy of vehicle counting and classification exceeds that measured in some existing systems.
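The HOG-then-SVM stage can be illustrated at the level of a single descriptor cell: compute gradients, bin their orientations weighted by magnitude, and normalise. This is a minimal NumPy sketch, not the paper's implementation; the bin count and cell layout are assumptions.

```python
import numpy as np

# Minimal histogram-of-oriented-gradients cell, as fed to an SVM classifier.
def hog_cell(patch, bins=9):
    """Orientation histogram of one cell, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))      # per-pixel gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(bins)
    idx = (ang / (180.0 / bins)).astype(int) % bins
    for b in range(bins):
        hist[b] = mag[idx == b].sum()
    return hist / (np.linalg.norm(hist) + 1e-9)    # L2 normalisation

patch = np.tile(np.arange(8.0), (8, 1))  # pure horizontal intensity ramp
h = hog_cell(patch)
```

For the horizontal ramp all gradient energy falls in the 0° bin, so the descriptor cleanly encodes a vertical edge pattern; a full detector concatenates many such cells over a detection window before the SVM.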
HD video applications can be represented as multiple tasks, each consisting of tightly coupled threads. Each task requires massive computation, and their communication can be categorized as asynchronous distributed small-data transfers and large streaming-data transfers. In this paper, we propose a high-performance programmable video platform that consists of four processing element (PE) clusters. Each PE cluster runs a task in the video application with RISC cores, a hardware operating system kernel (HOSK), and task-specific accelerators. PE clusters are connected by two separate point-to-point networks: one for asynchronous distributed control and the other for heavy streaming data transfers among the tasks. Furthermore, we developed an application mapping framework with which parallel executable code can be obtained from a manually developed SystemC model of the target application without knowing the detailed architecture of the video platform. To show the effectiveness of the platform and its mapping framework, we also present mapping results for an H.264/AVC 720p decoder/encoder and a VC-1 720p decoder at 30 fps, assuming that the platform operates at 200 MHz.
While a tree topology is often advocated for overlay video streaming due to its scalability, it suffers from discontinuous playback under highly dynamic network environments. On the other hand, gossip protocols using random message dissemination, though robust, fail to meet the real-time constraints for streaming applications. In this master thesis, I proposed TAG, a Tree-Assisted Gossip protocol, which adopts a tree structure with time indexing to accommodate asynchronous requests, and an ef...
Moutakki Zakaria; Ouloul Imad Mohamed; Afdel Karim; Amghar Abdellah
The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...
This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.
Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin
Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR can be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. First, the training system includes data collection using a depth camera, feature extraction, and training for each activity via Hidden Markov Models. Second, after training, the recognition engine recognizes the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
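The recognition step with one Hidden Markov Model per activity can be sketched with the scaled forward algorithm: score the observed feature sequence under each trained model and pick the best. The two toy models and the discrete observations below are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

# Scaled forward algorithm: log P(obs | model) for a discrete-emission HMM.
def forward_loglik(pi, A, B, obs):
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    logp = np.log(s)
    alpha = alpha / s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict then weight by emission
        s = alpha.sum()
        logp += np.log(s)
        alpha = alpha / s               # rescale to avoid underflow
    return logp

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B_walk = np.array([[0.8, 0.2], [0.7, 0.3]])  # "walking" mostly emits symbol 0
B_sit  = np.array([[0.2, 0.8], [0.3, 0.7]])  # "sitting" mostly emits symbol 1
obs = [0, 0, 0, 0]                            # quantized silhouette features
scores = {"walking": forward_loglik(pi, A, B_walk, obs),
          "sitting": forward_loglik(pi, A, B_sit, obs)}
best = max(scores, key=scores.get)
```

In the real system the observation symbols would come from quantized skeleton-joint features, and each activity's `A` and `B` would be learned during the training phase.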
The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. The implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
In 2009, the Texas Transportation Institute produced for the Texas Department of Transportation a document : called Video over IP Design Guidebook. This report summarizes an implementation of that project in the : form of a workshop. The workshop was...
Gregorio, Massimo De
In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantages of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs, makes this approach both very light from a computational and hardware point of view, and rather robust in performances. The system works on different scenarios and in difficult light conditions.
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring in microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVisionSystems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation consisting of a dual Intel® Xeon® CPU (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen using polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup being performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the time interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments which were needed. D) Medical students instantly share the information given by all staff and the image, thus avoiding the need for an extra teaching session. Conclusion: High definition
Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.
Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong
It is common for low-light night-vision helmets to equip a binocular viewer with image intensifiers. Such equipment not only provides night vision ability, but also a sense of stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is a direct-observation device, it is difficult to apply modern image processing technology to it. As a result, developing digital video technology for night vision is of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED micro display and an image processing PCB. Stereopsis is achieved through the binocular OLED micro display. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive the constraints of binocular stereo display in detail. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED micro display. There is sufficient space for function extensions in our system. The performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, etc.
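Once SURF matches between the two cameras are available, the depth step reduces to the standard triangulation Z = f·B/d for rectified views. A minimal sketch follows; the focal length and baseline values are illustrative, not the helmet's actual calibration parameters.

```python
import numpy as np

# Depth from disparity for rectified stereo matches (after SURF registration).
def depth_from_matches(xl, xr, focal_px=800.0, baseline_m=0.065):
    """Per-match depth Z = f * B / d, with d the disparity in pixels."""
    d = np.asarray(xl, float) - np.asarray(xr, float)
    return focal_px * baseline_m / d

# x-coordinates of two matched features in the left and right images
z = depth_from_matches([420.0, 300.0], [400.0, 290.0])
```

The nearer object produces the larger disparity (20 px vs. 10 px here) and therefore the smaller depth, which is exactly the cue the binocular OLED display adjusts to reproduce.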
Xu, Huihui; Jiang, Mingyan
Two-dimensional to three-dimensional (3-D) conversion in 3-D video applications has attracted great attention as it can alleviate the problem of stereoscopic content shortage. Depth estimation is an essential part of this conversion since the depth accuracy directly affects the quality of a stereoscopic image. In order to generate a perceptually reasonable depth map, a comprehensive depth estimation algorithm that considers the scenario type is presented. Based on the human visual system mechanism, which is sensitive to a change in the scenario, this study classifies the type of scenario into four classes according to the relationship between the movements of the camera and the object, and then leverages different strategies on the basis of the scenario type. The proposed strategies efficiently extract the depth information from different scenarios. In addition, the depth generation method for a scenario in which there is no motion, neither of the object nor the camera, is also suitable for the single image. Qualitative and quantitative evaluation results demonstrate that the proposed depth estimation algorithm is very effective for generating stereoscopic content and providing a realistic visual experience.
Mitani, Kohji; Sugawara, Masayuki; Shimamoto, Hiroshi; Yamashita, Takayuki; Okano, Fumio
An experimental ultrahigh-definition color video camera system with 7680(H) × 4320(V) pixels has been developed using four 8-million-pixel CCDs. The 8-million-pixel CCD, with a progressive scanning rate of 60 frames per second, has 4046(H) × 2048(V) effective imaging pixels, each of which is 8.4 µm square. We applied the four-imager pickup method to increase the camera's resolution. This involves attaching four CCDs to a special color-separation prism. Two CCDs are used for the green image, and the other two are used for red and blue. The spatial image sampling pattern of these CCDs relative to the optical image is equivalent to that of a 32-million-pixel sensor with a Bayer-pattern color filter. The prototype camera attains a limiting resolution of more than 2700 TV lines both horizontally and vertically, which is higher than that of a single 8-million-pixel CCD camera. The sensitivity of the camera is 2000 lux at F 2.8 with approximately 50 dB dark-noise level in the HDTV format. Its other specifications are a dynamic range of 200%, a power consumption of about 600 W and a weight, with lens, of 76 kg.
Clynick, Tony J.
A prototype laser video projector which uses electronic, optical, and mechanical means to project a television picture is described. With the primary goal of commercial viability, the price/performance ratio of the chosen means is critical. The fundamental requirement has been to achieve high-brightness, high-definition images of at least movie-theater size, at a cost comparable with other existing large-screen video projection technologies, while having the opportunity of developing and exploiting the unique properties of the laser projected image, such as its infinite depth of field. Two argon lasers are used in combination with a dye laser to achieve a range of colors which, despite not being identical to those of a CRT, prove to be subjectively acceptable. Acousto-optic modulation in combination with a rotary polygon scanner, digital video line stores, novel specialized electro-optics, and a galvanometric frame scanner form the basis of the projection technique, achieving a 30 MHz video bandwidth, high-definition scan rates (1125/60 and 1250/50), high contrast ratio, and good optical efficiency. Auditorium projection of HDTV pictures wider than 20 meters is possible. Applications including 360-degree projection and 3-D video provide further scope for exploitation of the HD laser video projector.
Chung, Krystal Shu Yi; Lee, Eleena Shi Lynn; Tan, Jia Qi; Teo, Dylan Jin Hao; Lee, Chris Ban Loong; Ee, Sharifah Rose; Sim, Sam Kim Yang; Chee, Chew Sim
This study investigated the effects of Playback Theatre on older adults' cognitive function and well-being, specifically in the Singapore context. Eighteen healthy older adults, older than 50 years of age, participated in the study. Due to practical limitations, a single-group pre-post study design was adopted. Participants completed the outcome measures before and after the training program. There were six weekly sessions in total (about 1.5 hours, once weekly). Participants experienced a significant improvement in their emotional well-being after training. However, there were no significant changes in participants' cognitive function or health-related quality of life. Our results suggest that Playback Theatre as a community program has potential to improve the mental and emotional well-being of older people. © 2018 AJA Inc.
We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.
Monica Adams, head librarian at Robinson Secondary in Fairfax County, Virginia, states that librarians should have the technical knowledge to support projects related to digital video editing. The digital video editing process is described, along with cabling, storage issues, and the computer system and software required.
Salas, Ramiro; Steele, Kenya; Lin, Amy; Loe, Claire; Gauna, Leslie; Jafar-Nejad, Paymaan
Playback Theatre (PT) is an improvisational form of theatre in which a group of actors “play back” real life stories told by audience members. In PT, a conductor elicits moments, feelings and stories from audience members, and conducts mini-interviews with those who volunteer a moment of their lives to be re-enacted or “played” for the audience. A musician plays music according to the theme of each story, and 4-5 actors listen to the interview and perform the story that has just been told. PT has been used in a large number of settings as a tool to share stories in an artistic manner. Despite its similarities to psychodrama, PT does not claim to be a form of therapy. We offered two PT performances to first year medical students at Baylor College of Medicine in Houston, Texas, to bring the students a safe and fun environment, conducive to sharing feelings and moments related to being a medical student. Through the moments and stories shared by students, we conclude that there is an enormous need in this population for opportunities to communicate the many emotions associated with medical school and with healthcare-related personal experiences, such as anxiety, pride, or anger. PT proved a powerful tool to help students communicate. PMID:24369762
Charlton, Benjamin D.; Ellis, William A. H.; McKinnon, Allan J.; Brumm, Jacqui; Nilsson, Karen; Fitch, W. Tecumseh
The ability to signal individual identity using vocal signals and distinguish between conspecifics based on vocal cues is important in several mammal species. Furthermore, it can be important for receivers to differentiate between callers in reproductive contexts. In this study, we used acoustic analyses to determine whether male koala bellows are individually distinctive and to investigate the relative importance of different acoustic features for coding individuality. We then used a habituation-discrimination paradigm to investigate whether koalas discriminate between the bellow vocalisations of different male callers. Our results show that male koala bellows are highly individualized, and indicate that cues related to vocal tract filtering contribute the most to vocal identity. In addition, we found that male and female koalas habituated to the bellows of a specific male showed a significant dishabituation when they were presented with bellows from a novel male. The significant reduction in behavioural response to a final rehabituation playback shows this was not a chance rebound in response levels. Our findings indicate that male koala bellows are highly individually distinctive and that the identity of male callers is functionally relevant to male and female koalas during the breeding season. We go on to discuss the biological relevance of signalling identity in this species' sexual communication and the potential practical implications of our findings for acoustic monitoring of male population levels. PMID:21633499
Crockford, Catherine; Wittig, Roman M; Zuberbühler, Klaus
A vital step in the evolution of language is likely to have been when signalers explicitly intended to direct recipients' attention to external objects with the use of referential signals. Although animal signals can direct the attention of others to external events, such as in monkey predator alarm calls, there is little evidence that this is the result of an intention to inform the recipient. Two recent studies, however, indicate that the production of chimpanzee quiet alarm calls, given to snakes, complies with some standard behavioral markers of intentional signaling, such as gaze alternation. But it is currently unknown whether the calls alone direct receivers' attention to the threat. To address this, we carried out a playback experiment with free-ranging chimpanzees in Budongo Forest, Uganda, using a within-subjects design. From a hidden speaker, we broadcast either quiet alarm 'hoos' ('alert hoos') or acoustically distinguishable hoos produced while resting ('rest hoos') and found a significant increase in search behavior after 'alert' compared with 'rest' hoos, with subjects monitoring either the call provider or the area near the call provider. In sum, chimpanzee 'alert hoos' represent a plausible case of an intentionally produced animal vocalization (other studies) that refers recipients to signalers and/or to an external event (this study).
Smigelsky, Melissa A; Neimeyer, Robert A; Murphy, Virginia; Brown, DeAndre; Brown, Vinessa; Berryhill, Anthony; Knowlton, Joy
Police-community relations have catapulted onto the national stage after several high-profile instances of alleged police brutality. Blame and hostility can be barriers to positive police-community relations. Playback is a form of audience-inspired, improvisational theater designed to promote connectivity and empathy through storytelling. We tested the feasibility and acceptability of an arts-based intervention, bringing together police officers and formerly incarcerated individuals from the same community in Memphis, Tennessee. We collected pre/post quantitative data from five police officers and five ex-offenders who took part in the intervention, as well as qualitative data to provide contextual information. The project was feasible and acceptable to participants. Participants showed gains in their ability to make meaning of stressful life experiences. The officers and ex-offenders showed parallel gains in their increased positive attitudes toward the other group. This study demonstrates that creating contexts of safety and understanding necessary to address relational problems is both feasible and acceptable to law enforcement and ex-offenders.
Wang, Shuangbao; Kelly, William
In this paper, we present a novel system, inVideo, for video data analytics and its use in transforming linear videos into interactive learning objects. inVideo analyzes video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…
Cheung, Gene; Ortega, Antonio; Cheung, Ngai-Man
While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and "merge" frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost.
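The transmission/storage tradeoff at the heart of this design can be illustrated with a toy expected-rate model: at each switch point, either an I-frame is sent (cheap storage, expensive transmission) or a pre-stored redundant merge frame is sent (extra storage, cheaper transmission). The frame sizes and switching probability below are illustrative assumptions, not figures from the paper.

```python
# Toy expected-transmission-rate model for interactive view switching.
R_I, R_P, R_MERGE = 10.0, 2.0, 3.0   # relative bits per transmitted frame

def expected_rate_iframe(p_switch):
    """Structure A: insert an I-frame whenever the client switches views."""
    return (1 - p_switch) * R_P + p_switch * R_I

def expected_rate_redundant(p_switch):
    """Structure B: redundant merge frames; storage grows, rate drops."""
    return (1 - p_switch) * R_P + p_switch * R_MERGE

p = 0.2                          # probability a client switches at this frame
a = expected_rate_iframe(p)
b = expected_rate_redundant(p)
```

Sweeping `p` from 0 to 1 shows why the paper's optimization is needed: neither structure wins everywhere, so the encoder chooses per-picture representations to trade expected rate against storage.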
Wilson, Rhoda M.
Human Systems Integration Report Low back pain (LBP) and work-related musculoskeletal disorders (WMSDs) can lead to employee absenteeism, sick leave, and permanent disability. Over the years, much work has been done in examining physical exposure to ergonomic risks. The current research presents a new approach for assessing WMSD risk during lifting related tasks that combines traditional observational methods with video recording methods. One particular application area, the Future Com...
Full Text Available Dual-mode wireless video transmission faces two major problems. First, the two links have different time delays, so frames arrive asynchronously and decode with errors; second, the two networks have unequal bandwidths, which creates a scheduling problem. To solve these two problems, a TD-SCDMA/CDMA2000 1x dual-mode wireless video transmission design method is proposed. To eliminate the decoding frame errors, the design adds frame identifiers and packet preprocessing at the sending end and synchronized combination at the receiving end. To solve the scheduling problem, a cooperative wireless-channel management and video data transmission scheduling algorithm is proposed.
Valli, D.; Ganesan, K.
Chaos based cryptosystems are an efficient method to deal with improved speed and highly secured multimedia encryption because of its elegant features, such as randomness, mixing, ergodicity, sensitivity to initial conditions and control parameters. In this paper, two chaos based cryptosystems are proposed: one is the higher-dimensional 12D chaotic map and the other is based on the Ikeda delay differential equation (DDE) suitable for designing a real-time secure symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of plain video and cipher video along with the diffusion of current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances the robustness against statistical, differential and chosen/known plain text attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
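A minimal sketch of the S-box-plus-CBC structure described above. Purely for illustration, a 1D logistic map stands in for the paper's 12D chaotic map and Ikeda DDE, and the S-box is an arbitrary seeded byte permutation rather than the authors' construction:

```python
import random

def logistic_keystream(x0, r, n):
    """Quantize iterates of the logistic map x -> r*x*(1-x) into bytes.
    A stand-in keystream generator; the paper uses far richer dynamics."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

random.seed(42)                       # fixed, purely illustrative S-box
SBOX = list(range(256))
random.shuffle(SBOX)
INV_SBOX = [0] * 256
for i, v in enumerate(SBOX):
    INV_SBOX[v] = i

def encrypt(plain, x0=0.3456, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(plain))
    prev, cipher = iv, []
    for p, k in zip(plain, ks):
        c = SBOX[p ^ k] ^ prev        # substitute, then chain with previous cipher byte (CBC)
        cipher.append(c)
        prev = c
    return cipher

def decrypt(cipher, x0=0.3456, r=3.99, iv=0x5A):
    ks = logistic_keystream(x0, r, len(cipher))
    prev, plain = iv, []
    for c, k in zip(cipher, ks):
        plain.append(INV_SBOX[c ^ prev] ^ k)
        prev = c
    return plain

frame = [10, 20, 30, 40]              # stand-in for video pixel bytes
assert decrypt(encrypt(frame)) == frame
```

The chaining term `^ prev` is what diffuses each pixel into all subsequent cipher bytes, which is the CBC property the abstract credits for resistance to chosen/known-plaintext attacks.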
In the past decade, the display format has evolved from HD (High Definition) through Full HD (1920×1080) to UHD (4k×2k), steering the display industry in two directions: liquid crystal displays (LCDs) from 10 inches to 100 inches and beyond, and projectors. Although LCDs dominate the market, producing such displays requires heavy capital expenditure, with limited attention to environmental pollution and protection. Projection systems are worth considering because they offer wider viewing access, flexible placement, energy savings, and environmental benefits. This work designs and fabricates a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system including a telecentric lens, matched to the LCoS panel, that collimates the emitted light to enlarge the field angle. The optical path is then guided by a symmetric lens: light from the LCoS passes through the lens, strikes and reflects off an aspherical mirror, and forms a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.
Naito, Hiromichi; Guyette, Francis X; Martin-Gill, Christian; Callaway, Clifton W
Video laryngoscopy (VL) is a technical adjunct to facilitate endotracheal intubation (ETI). VL also provides objective data for training and quality improvement, allowing evaluation of the technique and airway conditions during ETI. Previous studies of factors associated with ETI success or failure are limited by insufficient nomenclature, individual recall bias and self-report. We tested whether the covariates in prehospital VL recorded data were associated with ETI success. We also measured associations between time and clinical variables. A retrospective review was conducted in a non-physician staffed helicopter emergency medical service system. ETI was typically performed using sedation and neuromuscular blockade under protocolized orders. We obtained process and outcome variables from digitally recorded VL data. Patient characteristics were also obtained from the emergency medical service record and linked to the VL recorded data. The primary outcome was to identify VL covariates associated with successful ETI attempts. Among 304 VL recorded ETI attempts in 268 patients, ETI succeeded for 244 attempts and failed for 60 attempts (first-pass success rate, 82%; overall success rate, 94%). The laryngoscope blade tip usually moved from a shallow position in the oropharynx to the vallecula. In the multivariable logistic regression analysis, attempt time (p = 0.02; odds ratio [OR] 0.99) and Cormack-Lehane view were significant: a poorer Cormack-Lehane view and a longer ETI attempt time were negatively associated with successful ETI attempts. An initially shallow blade tip position may be associated with longer ETI time. VL is useful for measuring and describing multiple factors of ETI and can provide valuable data.
Bales, John W.
The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
Johnson, Don; Johnson, Mike
The process of digital capture, editing, and archiving of video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more affordable, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk, and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backup and archiving of the completed projects and files also are outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.
Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming
Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...
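The waste the abstract refers to can be quantified with a toy calculation; the bitrate, download rate, and quit time below are hypothetical:

```python
def wasted_download(bitrate, download_rate, quit_time, video_len):
    """Data downloaded but never watched when a user quits at quit_time
    (seconds). Progressive download runs ahead of playback at
    download_rate; everything fetched beyond the playback point at the
    moment of quitting is wasted. Both rates share one unit (e.g. Mb/s)."""
    downloaded = min(download_rate * quit_time, bitrate * video_len)
    watched = bitrate * quit_time
    return max(downloaded - watched, 0)

# Hypothetical numbers: a 1 Mb/s video fetched at 4 Mb/s, and the user
# quits 60 s into a 600 s video: 4*60 - 1*60 = 180 Mb fetched, never viewed.
print(wasted_download(bitrate=1.0, download_rate=4.0, quit_time=60, video_len=600))  # → 180.0
```

Smart streaming as described above would throttle `download_rate` toward `bitrate` when an early departure is predicted, shrinking this gap.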
Darabi, K; G. Ghinea
In this paper, an expert-based model for generating personalized video summaries is proposed. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores of the video segments assigned higher priorities by end users are upgraded. Given the required summary length, the highest-scored video frames are inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have...
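The selection step can be sketched as follows; the frame scores, boost factor, and preferred-segment set are invented for illustration:

```python
def personalized_summary(expert_scores, preferred_segments, summary_len, boost=2.0):
    """expert_scores: {frame_id: mean score from the expert annotators}.
    preferred_segments: frame ids inside segments the end user prioritized;
    their scores are upgraded before the top frames are selected."""
    adjusted = {f: s * boost if f in preferred_segments else s
                for f, s in expert_scores.items()}
    ranked = sorted(adjusted, key=adjusted.get, reverse=True)
    return sorted(ranked[:summary_len])   # chronological order for playback

scores = {0: 0.9, 1: 0.2, 2: 0.5, 3: 0.4, 4: 0.8}
# Frame 3 sits in a segment the user prioritized, so it displaces frame 2.
print(personalized_summary(scores, preferred_segments={3}, summary_len=3))  # → [0, 3, 4]
```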
U.S. Geological Survey, Department of the Interior — These data are the trackline from the seafloor photograph and video survey conducted September 2004 using the mini-SeaBOSS sampling system on the R/V Rafael in...
Ridgway, James; Stannett, Mike
Although techniques for separate image and audio steganography are widely known, relatively little has been described concerning the hiding of information within video streams ("video steganography"). In this paper we review the current state of the art in this field, and describe the key issues we have encountered in developing a practical video steganography system. A supporting video is also available online at http://www.youtube.com/watch?v=YhnlHmZolRM
Full Text Available Vision-based monitoring systems using visible-spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in global accuracy of 48%. Thermal speed measurements were consistently more accurate than regular video measurements at both daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible-light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, smaller storage space, and lower processing requirements.
Squire, Kurt D.
Recently, attention has been paid to computer and video games as a medium for learning. This article provides a way of conceptualizing them as possibility spaces for learning. It provides an overview of two research programs: (1) an after-school program using commercial games to develop deep expertise in game play and game creation, and (2) an…
Jahn, H.; Oertel, D.
The present analysis deals with the influence of the video channel harmonic response characteristic of a push-broom scanner on the spatial transmission function and the signal-to-noise ratio. It is shown that when detector noise is prevalent, the video frequency bandwidth influences both the transmission function and the SNR, but influences only the transmission function when photon noise prevails.
Parisot, Christophe; Antonini, Marc; Barlaud, Michel
Wavelet coding has been shown to achieve better compression than DCT coding and moreover allows scalability. 2D DWT can be easily extended to 3D and thus applied to video coding. However, 3D subband coding of video suffers from two drawbacks. The first is the amount of memory required for coding large 3D blocks; the second is the lack of temporal quality due to the sequence temporal splitting. In fact, 3D block-based video coders produce jerks. They appear at blocks temporal borders during video playback. In this paper, we propose a new temporal scan-based wavelet transform method for video coding combining the advantages of wavelet coding (performance, scalability) with acceptable reduced memory requirements, no additional CPU complexity, and avoiding jerks. We also propose an efficient quality allocation procedure to ensure a constant quality over time.
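One level of the temporal wavelet transform the abstract builds on can be illustrated with a Haar filter over consecutive frame pairs. The frames below are toy 2-pixel arrays; the paper's scan-based scheme slides such a transform over short windows instead of buffering large 3D blocks:

```python
def temporal_haar(frames):
    """One level of temporal Haar wavelet over consecutive frame pairs:
    each pair (a, b) of co-located pixels becomes a low band (a+b)/2 and
    a high band (a-b)/2. Applying this over a short sliding window of
    frames, instead of a large 3D block, keeps memory bounded and avoids
    the jerks at block temporal borders mentioned above."""
    low, high = [], []
    for a, b in zip(frames[0::2], frames[1::2]):
        low.append([(x + y) / 2 for x, y in zip(a, b)])
        high.append([(x - y) / 2 for x, y in zip(a, b)])
    return low, high

def inverse_haar(low, high):
    frames = []
    for l, h in zip(low, high):
        frames.append([x + y for x, y in zip(l, h)])   # a = l + h
        frames.append([x - y for x, y in zip(l, h)])   # b = l - h
    return frames

frames = [[10, 20], [12, 18], [50, 60], [52, 58]]      # four tiny 2-pixel "frames"
low, high = temporal_haar(frames)
assert inverse_haar(low, high) == frames               # perfect reconstruction
```

Most of the energy lands in the low band; the near-zero high band is what makes the temporal decomposition compress well.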
Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C
Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem since material specially designed for underwater environments is very expensive. To transmit images and video wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates but only over very small distances, since light dispersion in water severely penalizes the transmitted signal; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. Where the distance between transmitter and receiver is short, EM waves are an interesting option since they provide data transfer rates high enough to transmit high-resolution video. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round-trip time (RTT) depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that short communication distances with high data transfer rates are feasible.
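A minimal sketch of the kind of RTT probing such a study relies on. A local UDP echo loop stands in for the receiving underwater node; in a real deployment the probe packets would cross the 2.4 GHz freshwater EM link:

```python
import socket
import statistics
import threading
import time

# Local UDP echo server: a stand-in for the remote underwater node.
def echo_server(sock):
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"stop":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS-assigned port
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
rtts = []
for _ in range(20):
    t0 = time.perf_counter()
    client.sendto(b"ping", ("127.0.0.1", port))
    client.recvfrom(64)                # echoed probe returns
    rtts.append((time.perf_counter() - t0) * 1000)   # milliseconds

print(f"mean RTT {statistics.mean(rtts):.3f} ms, stdev {statistics.pstdev(rtts):.3f} ms")
client.sendto(b"stop", ("127.0.0.1", port))
```

Repeating such measurements while sweeping modulation, transfer rate, and water temperature yields the data the paper fits with its set of equations.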
In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches, a smart client approach and a smart server approach, and a prototype system implementation of our proposed approach.
Racca, Roberto G.; Scotten, Larry N.
This article describes a method that allows the digital recording of sequences of three black and white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microsecond(s) it can exceed 10,000 frames per second in actual use. The subject under study is strobe- illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. Color is used as the distinguishing parameter that allows the overlaid exposures to be resolved. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash- illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
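The crosstalk-correction equations mentioned above amount to inverting a small linear mixing model: each measured color channel is a weighted sum of the three flash-lit scenes. A sketch with invented calibration coefficients (the real ones come from the one-time calibration the article describes):

```python
def solve3(M, y):
    """Solve the 3x3 system M x = y by Gauss-Jordan elimination with
    partial pivoting. M models the crosstalk:
    measured_channel_i = sum_j M[i][j] * scene_j."""
    A = [row[:] + [v] for row, v in zip(M, y)]        # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

# Hypothetical crosstalk: each CCD color channel picks up a little of the
# other two flash exposures (diagonal = wanted signal, off-diagonal = leak).
M = [[1.00, 0.10, 0.05],
     [0.08, 1.00, 0.07],
     [0.04, 0.12, 1.00]]
scene = [200.0, 50.0, 120.0]                          # true per-flash intensities
measured = [sum(m * s for m, s in zip(row, scene)) for row in M]
recovered = solve3(M, measured)
print([round(v, 6) for v in recovered])               # → [200.0, 50.0, 120.0]
```

Applied per pixel, this unmixing is what suppresses the "ghosting" between the three color-separated frames.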
This thesis offers a detailed analysis of various topics related to the question of whether video games can be art. It first analyzes the current academic discussion on the subject and confronts the differing opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. It then analyzes the properties inherent to video games in order to find the reason why the cultural elite considers video games as i...
Van Reeth, Frank; Raymaekers, Chris; TREKELS, Peter; VERKOYEN, Stefan; FLERACKERS, Eddy
Conventional educational material is increasingly complemented with computer-based multimedia material. In order to make this material available to teachers and students in a structured manner, we developed a multimedia database and accompanying tools for creating, manipulating and formatting the teaching content. Recently, we expanded this educational multimedia database with functionality to support streamed video as well. Given the vast amounts of data that need to be stored and transmi...
Tosteberg, Joakim; Axelsson, Thomas
A team of developers from Epsilon AB has developed a lightweight remote-controlled quadcopter named Crazyflie. The team wants to allow a pilot to navigate the quadcopter using video from an on-board camera as the only guidance. The master thesis evaluates the feasibility of mounting a camera module on the quadcopter and streaming images from the camera to a computer, using the existing quadcopter radio link. Using theoretical calculations and measurements, a set of requirements that must be fulfill...
In conventional electronic video stabilization, the stabilized frame is obtained by cropping the input frame to cancel camera shake. While a small cropping size results in strong stabilization, it does not provide us satisfactory results from the viewpoint of image quality, because it narrows the angle of view. By fusing several frames, we can effectively expand the area of input frames, and achieve strong stabilization even with a large cropping size. Several methods for doing so have been s...
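The basic crop-window compensation described in the first sentences can be sketched as follows; the toy frame, shake estimate, and margin are invented for illustration:

```python
def stabilize_crop(frame, shake, crop_margin):
    """Crop-based electronic stabilization: shift the crop window by the
    negative of the estimated camera shake (dx, dy), clamped to the
    available margin. A larger margin permits stronger stabilization but
    narrows the angle of view, which is the tradeoff the abstract notes."""
    h, w = len(frame), len(frame[0])
    dx = max(-crop_margin, min(crop_margin, -shake[0]))
    dy = max(-crop_margin, min(crop_margin, -shake[1]))
    x0, y0 = crop_margin + dx, crop_margin + dy
    return [row[x0:w - crop_margin + dx] for row in frame[y0:h - crop_margin + dy]]

# 4x4 toy frame, margin of 1 pixel -> 2x2 stabilized output.
frame = [[ 0,  1,  2,  3],
         [10, 11, 12, 13],
         [20, 21, 22, 23],
         [30, 31, 32, 33]]
# Camera shook one pixel right, so the crop window shifts one pixel left.
print(stabilize_crop(frame, shake=(1, 0), crop_margin=1))  # → [[10, 11], [20, 21]]
```

The multi-frame fusion methods the abstract surveys effectively enlarge `frame` beyond the sensor area, so the same compensation works with a larger crop (wider view) for the same shake.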
... without the need for expensive, specialized DSP programming and testing tools. ARIA allows developers to exploit the speed and low cost of modern CPUs, provides cross-platform portability, and simplifies the modification and sharing of codes...
Gerhardt, H Carl
The two main spectral components of the advertisement calls of two species of North American gray treefrogs (Hyla chrysoscelis and H. versicolor) overlap broadly in frequency, and the frequency of each component matches the sensitivity of one of the two different auditory inner ear organs. The calls of the two species differ in the shape and repetition rate (pulse rate) of sound pulses within trills. Standard synthetic calls with one of these spectral peaks and the pulse rate typical of conspecific calls were tested against synthetic alternatives that had the same spectral peak but a different pulse rate. The results were generalized over a wide range of playback levels. Selectivity based on differences in pulse rate depended on which spectral peak was used in some tests, and greater pulse-rate selectivity was usually observed when the low-frequency rather than the high-frequency peak was used. This effect was more pronounced and occurred over a wider range of playback levels in H. versicolor than in H. chrysoscelis when the pulse rate of the alternative was higher than that of the standard call. In tests at high playback levels with an alternative of 15 pulses s(-1), however, females of H. versicolor showed greater selectivity for the standard call when the high-frequency rather than the low-frequency spectral peak was used. This last result may reflect the different ways in which females of the two species assess trains of pulses, and the broad implications for understanding the underlying auditory mechanisms are discussed.
Matessi, Giuliano; Dabelsteen, Torben; Pilastro, A.
Populations of Reed Buntings Emberiza schoeniclus in the western Palearctic are classified in two major subspecies groups according to morphology: northern migratory schoeniclus and Mediterranean resident intermedia. Songs of the two groups differ mainly in complexity and syllable structure, with intermedia songs being more complex. We explored the possibilities of song as a subspecies isolating mechanism by testing whether male schoeniclus Reed Buntings reacted differently to field playbacks of songs from their own subspecies group, from the foreign subspecies group and from a control species...
Lee, Hyun Jeong; Oh, Se An
Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT using a video coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by a real-time position management (RPM) Respiratory Gating System (Varian, USA) and the patients were trained using the video coached respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and displacement. The standard deviation of the guided breathing decreased to 65.14% in the inhale peak and 71.04% in the exhale peak compared with the...
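The regularity comparison the study performs reduces to comparing standard deviations of the per-cycle peak periods between free and guided breathing; a sketch with hypothetical periods:

```python
import statistics

def regularity_gain(free_periods, guided_periods):
    """Ratio of guided-breathing to free-breathing standard deviation of
    the peak periods; a value below 1.0 means the video coached guiding
    made the respiration more regular."""
    return statistics.pstdev(guided_periods) / statistics.pstdev(free_periods)

# Hypothetical inhale-peak periods in seconds for one patient.
free   = [3.8, 4.6, 3.2, 5.1, 4.0]
guided = [4.0, 4.2, 3.9, 4.1, 4.0]
print(round(regularity_gain(free, guided), 3))   # well below 1.0: guiding helped
```

The paper reports analogous per-phase standard deviations for displacement as well as period, computed the same way from the RPM traces.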
Chen, Ming; He, Jing; Deng, Rui; Chen, Qinghui; Zhang, Jinlong; Chen, Lin
To further investigate the feasibility of digital signal processing (DSP) algorithms (e.g., symbol timing synchronization, channel estimation and equalization, and sampling clock frequency offset (SCFO) estimation and compensation) for real-time optical orthogonal frequency-division multiplexing (OFDM) systems, 2.97-Gb/s real-time high-definition video signal parallel transmission is experimentally demonstrated in OFDM-based short-reach intensity-modulated direct-detection (IM-DD) systems. The experimental results show that, in the presence of ∼12 ppm SCFO between transmitter and receiver, adaptively modulated OFDM signal transmission over 20 km of standard single-mode fiber with a bit error rate of less than 1 × 10-9 can be achieved by using only a DSP-based small-SCFO estimation and compensation method, without forward error correction. To the best of our knowledge, we demonstrate for the first time transmission of a video signal at a bit rate in excess of 1 Gb/s in a simple real-valued inverse fast Fourier transform and fast Fourier transform based IM-DD optical OFDM system employing a directly modulated laser.
Agnisarman, Sruthy; Narasimha, Shraddhaa; Chalil Madathil, Kapil; Welch, Brandon; Brinda, Fnu; Ashok, Aparna; McElligott, James
Telemedicine is the use of technology to provide and support health care when distance separates the clinical service and the patient. Home-based telemedicine systems involve the use of such technology for medical support and care, connecting the patient from the comfort of their home with the clinician. For such a system to be used extensively, it is necessary to understand the issues faced in using it not only by patients but also by clinicians. The aim of this study was to conduct a heuristic evaluation of four telemedicine software platforms (Doxy.me, Polycom, Vidyo, and VSee) to assess possible problems and limitations that could affect the usability of the system from the clinician's perspective. Five experts individually evaluated all four systems using Nielsen's list of heuristics, classifying the issues on a severity rating scale. A total of 46 unique problems were identified by the experts. The heuristics most frequently violated were visibility of system status and error prevention, each accounting for 24% (11/46) of the issues. Esthetic and minimalist design was second, contributing 13% (6/46) of the total. Heuristic evaluation coupled with a severity rating scale was found to be an effective method for identifying problems with the systems. Prioritizing these problems based on the rating provides a good starting point for resolving the issues affecting these platforms. There is a need for better transparency and a more streamlined approach to how physicians use telemedicine systems. Visibility of the system status and speaking the users' language are keys to achieving this.
Lee, Sung-Ho; Jang, Bumjoon; Kim, Dong Hee; Park, Chang Hyun; Bae, Gyuri; Park, Seung Woo; Park, Seung-Han
Unlike ordinary laser scanning microscopies of the past, nonlinear optical laser scanning microscopy (SHG and THG microscopy) applies ultrafast laser technology, which delivers high peak power at relatively inexpensive, low average power. Its short-pulse nature reduces ionization damage in organic molecules and enables bright label-free images. In this study, we measured cell division of a zebrafish egg with ultrafast video imaging using a multimodal nonlinear optical microscope. The result shows in-vivo label-free imaging of cell division with sub-cellular resolution.
Lee, A R; Yang, S; Shin, Y H; Kim, J A; Chung, I S; Cho, H S; Lee, J J
We evaluated the effects of three airway manipulation manoeuvres: (a) conventional (single-handed chin lift); (b) backward, upward and right-sided pressure (BURP) manoeuvre; and (c) modified jaw thrust manoeuvre (two-handed aided by an assistant) on laryngeal view and intubation time using the Clarus Video System in 215 patients undergoing general anaesthesia with orotracheal intubation. In the first part of this study, the laryngeal view was recorded as a modified Cormack-Lehane grade with each manoeuvre. In the second part, intubation was performed using the assigned airway manipulation. The primary outcome was the time to intubation, and the secondary outcomes were the modified Cormack-Lehane grade, the number of attempts and the overall success rate. There were significant differences in modified Cormack-Lehane grade between the three airway manipulations (p < 0.0001). Post-hoc analysis indicated that the modified jaw thrust improved the laryngeal view compared with the conventional (p < 0.0001) and the BURP manoeuvres (p < 0.0001). The BURP worsened the laryngeal view compared with the conventional manoeuvre (p = 0.0132). The time to intubation in the modified jaw thrust group was shorter than with the conventional manoeuvre (p = 0.0004) and the BURP group (p < 0.0001). We conclude that the modified jaw thrust is the most effective manoeuvre at improving the laryngeal view and shortening intubation time with the Clarus Video System. © 2013 The Association of Anaesthetists of Great Britain and Ireland.
Mol, J.J.D.; Pouwelse, J.A.; Meulpolder, M.; Epema, D.H.J.; Sips, H.J.
Centralised solutions for Video-on-Demand (VoD) services, which stream pre-recorded video content to multiple clients who start watching at the moments of their own choosing, are not scalable because of the high bandwidth requirements of the central video servers. Peer-to-peer (P2P) techniques which
Dahlin, Christine R; Wright, Timothy F
The question of why animals participate in duets is an intriguing one, as many such displays appear to be more costly to produce than individual signals. Mated pairs of yellow-naped amazons, Amazona auropalliata, give duets on their nesting territories. We investigated the function of those duets with a playback experiment. We tested two hypotheses for the function of those duets: the joint territory defense hypothesis and the mate-guarding hypothesis, by presenting territorial pairs with three types of playback treatments: duets, male solos, and female solos. The joint territory defense hypothesis suggests that individuals engage in duets because they appear more threatening than solos and are thus more effective for the establishment, maintenance and/or defense of territories. It predicts that pairs will be coordinated in their response (pair members approach speakers and vocalize together) and will either respond more strongly (more calls and/or more movement) to duet treatments than to solo treatments, or respond equally to all treatments. Alternatively, the mate-guarding hypothesis suggests that individuals participate in duets because they allow them to acoustically guard their mate, and predicts uncoordinated responses by pairs, with weak responses to duet treatments and stronger responses by individuals to solos produced by the same sex. Yellow-naped amazon pairs responded to all treatments in an equivalently aggressive and coordinated manner by rapidly approaching speakers and vocalizing more. These responses generally support the joint territory defense hypothesis and further suggest that all intruders are viewed as a threat by resident pairs.
Renato Bobsin Machado
Full Text Available OBJECTIVE: Develop a prototype using computer resources to optimize the management process of clinical information and video colonoscopy exams. MATERIALS AND METHODS: Through meetings with medical and computer experts, the following requirements were defined: management of information about medical professionals, patients and exams; video and images captured by video colonoscopes during the exam; and the availability of these videos and images on the Web for further analysis. The technologies used were Java, Flex, JBoss, Red5, JBoss SEAM, MySQL and Flamingo. RESULTS AND DISCUSSION: The prototype contributed to the area of colonoscopy by providing resources to maintain the patients' history, tests and images from video colonoscopies. The web-based application allows greater flexibility to physicians and specialists. The resources for remote analysis of data and tests can help doctors and patients in the examination and diagnosis. CONCLUSION: The implemented prototype has contributed to improving colonoscopy-related processes. Future activities include the prototype's deployment in the Service of Coloproctology, the use of this model for real-time monitoring of exams, and knowledge extraction from the structured database using artificial intelligence.
Hassner, Tal; Wolf, Lior; Lerner, Anat; Leitner, Yael
Knowing how adults with ADHD interact with prerecorded video lessons at home may provide a novel means of early screening and long-term monitoring for ADHD. Viewing patterns of 484 students with known ADHD were compared with 484 age, gender, and academically matched controls chosen from 8,699 non-ADHD students. Transcripts generated by their video playback software were analyzed using t tests and regression analysis. ADHD students displayed significant tendencies (p ≤ .05) to watch videos with more pauses and more reviews of previously watched parts. Other parameters showed similar tendencies. Regression analysis indicated that attentional deficits remained constant for age and gender but varied for learning experience. There were measurable and significant differences between the video-viewing habits of the ADHD and non-ADHD students. This provides a new perspective on how adults cope with attention deficits and suggests a novel means of early screening for ADHD. © 2011 SAGE Publications.
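A minimal sketch of the kind of group comparison the study describes, using Welch's t statistic on per-student pause counts; the data below are hypothetical, not the study's, and the study's actual analysis pipeline is not specified here.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical pause counts per video lesson (illustrative only).
adhd = [12, 15, 9, 14, 11, 13]
control = [7, 8, 6, 9, 7, 8]
t = welch_t(adhd, control)  # positive t: the ADHD group pauses more
```

A large positive t here corresponds to the reported tendency of ADHD students to watch with more pauses; in practice one would also compute the p-value against the Welch-Satterthwaite degrees of freedom.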
Nortvig, Anne Mette; Sørensen, Birgitte Holm
This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...
Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it.Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
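A toy sketch of imperceptible embedding in the spirit described above, hiding payload bits in the least-significant bits of pixel values. Note this LSB scheme is deliberately simple and is *not* robust to distortion, unlike the watermark the paper proposes; the frame and payload below are made-up values for illustration.

```python
def embed_lsb(pixels, bits):
    """Embed watermark bits into the least-significant bits of 8-bit
    pixel values; a +/-1 change is visually imperceptible."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_lsb(pixels, n_bits):
    """Recover the embedded bits from the marked pixels."""
    return [p & 1 for p in pixels[:n_bits]]

frame = [137, 44, 200, 99, 18, 250, 73, 160]   # toy 8-pixel "frame"
mark = [1, 0, 1, 1, 0, 0, 1, 0]                # copyright payload bits
marked = embed_lsb(frame, mark)
assert extract_lsb(marked, 8) == mark
```

Robust schemes instead spread the payload across transform-domain coefficients so it survives compression and filtering, at the cost of a more involved embedder and detector.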
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
Full Text Available As the public education system in Northern Ontario continues on a downward spiral, a plethora of secondary school students are being placed in alternative educational environments. Juxtaposing the two settings reveals very similar methods and characteristics of educating our youth, rather than a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that traditional methods of teaching are not appropriate for educating at-risk students in alternative schools. Paper-and-pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on each unique individual, and students' true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. A graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.
He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.
Due to the increasing user expectation on watching experience, moving web high quality video streaming content from the small screen in mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change for various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
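The last two metrics can be computed directly from extracted frame timestamps. A minimal sketch (the 1.5-frame-period freeze threshold is an assumption for illustration, not the paper's calibration):

```python
def freeze_metrics(timestamps, nominal_fps=30.0, tolerance=1.5):
    """Freeze Time Ratio and Rate of Freeze Events from frame timestamps.

    A freeze is any inter-frame gap longer than `tolerance` nominal
    frame periods; the excess over one period counts as frozen time.
    """
    expected = 1.0 / nominal_fps
    duration = timestamps[-1] - timestamps[0]
    freeze_time, events = 0.0, 0
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > tolerance * expected:
            freeze_time += gap - expected
            events += 1
    return freeze_time / duration, events / duration

# A 30 fps stream with a single 0.5 s stall after the fourth frame.
ts = [i / 30.0 for i in range(4)] + \
     [3 / 30.0 + 0.5 + i / 30.0 for i in range(1, 5)]
ratio, rate = freeze_metrics(ts)  # ratio ~0.68, rate ~1.36 events/s
```

Image Quality and Rendering Quality, by contrast, need the barcode-based frame matching between source and destination videos that the paper describes.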
Ebe, Kazuyu, E-mail: firstname.lastname@example.org; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated, and positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the waveforms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviations) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
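One plausible reading of the summary statistic quoted above (mean of the per-frame absolute differences plus two standard deviations) can be sketched as follows; the trajectory values are synthetic, not the study's measurements:

```python
import math

def positional_error(target_y, field_y):
    """QA metric: mean absolute per-frame difference between the exposed
    target center and exposed field center, plus 2 standard deviations
    of those differences (one reading of 'mean + 2 SD')."""
    diffs = [abs(t - f) for t, f in zip(target_y, field_y)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean + 2 * sd

# Synthetic Y positions (mm) of target center vs. field center.
err = positional_error([0.0, 1.0, 2.0, 3.0], [0.2, 0.9, 2.3, 2.8])
```

With ~1156 frames per trajectory, as reported, the metric summarizes both the systematic offset (the mean term) and frame-to-frame tracking jitter (the 2 SD term).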
Afiouni, Einar Nour; Øvrelid, Leif Julian
This project aims to examine the possibilities of using game theoretic concepts and multi-agent systems in modern video games with real time demands. We have implemented a multi-issue negotiation system for the strategic video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.
Saur, Drew D.; Tan, Yap-Peng; Kulkarni, Sanjeev R.; Ramadge, Peter J.
Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of video has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video by using the low-level information available directly from MPEG compressed video. Being able to work directly in the compressed domain can greatly reduce the processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.
Yaron, Avi; Bar-Zohar, Meir; Horesh, Nadav
Sophisticated surgeries require the integration of several medical imaging modalities, like MRI and CT, which are three-dimensional. Many efforts are invested in providing the surgeon with this information in an intuitive and easy-to-use manner. A notable development, made by Visionsense, enables the surgeon to visualize the scene in 3D using a miniature stereoscopic camera. It also provides real-time 3D measurements that allow registration of navigation systems as well as 3D imaging modalities, overlaying these images on the stereoscopic video image in real time. The real-time MIS 'see through tissue' fusion solutions enable the development of new MIS procedures in various surgical segments, such as spine, abdomen, cardio-thoracic and brain. This paper describes 3D surface reconstruction and registration methods using the Visionsense camera, as a step toward fully automated multi-modality 3D registration.
Full Text Available Biometric verification can be efficiently used for intrusion detection and intruder identification in video surveillance systems. Biometric techniques can be broadly divided into traditional and the so-called soft biometrics. Whereas traditional biometrics deals with physical characteristics such as face features, eye iris, and fingerprints, soft biometrics is concerned with such information as gender, national origin, and height. Traditional biometrics is versatile and highly accurate, but it is very difficult to obtain traditional biometric data from a distance and without personal cooperation. Soft biometrics, although less accurate, can be used much more freely. Recently, much research has been done on human identification using soft biometric data collected from a distance. In this paper, we use both traditional and soft biometrics for human identification and propose a framework for solving problems such as lighting, occlusion, and shadowing.
Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, in the facet of hand tracking, a joint observation model with the hand cues of skin saliency, motion and depth is integrated into a particle filter in order to move particles to the local peak in the likelihood. The proposed hand tracking method, namely, the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. In the facet of gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the settings of the challenging hand-signed digits datasets and the American Sign Language dataset, which corroborate the performance of the novel techniques.
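A generic particle-filter skeleton of the kind the tracker builds on can be sketched as follows. The joint observation model is represented here by a single synthetic Gaussian likelihood standing in for the product of skin-saliency, motion, and depth cues; all parameters are illustrative, not the paper's.

```python
import math
import random

def pf_track(observe, n=500, steps=10, motion_std=5.0, seed=0):
    """Minimal SIR particle filter: propagate particles with Gaussian
    motion noise, weight by the observation likelihood, resample, and
    return the mean position of the final particle cloud."""
    rng = random.Random(seed)
    particles = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n)]
    for _ in range(steps):
        # Motion model: random-walk diffusion of each particle.
        particles = [(x + rng.gauss(0, motion_std), y + rng.gauss(0, motion_std))
                     for x, y in particles]
        # Observation model: re-weight particles by the likelihood.
        weights = [observe(p) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resampling concentrates particles on the likelihood peak.
        particles = rng.choices(particles, weights=weights, k=n)
    return (sum(x for x, _ in particles) / n, sum(y for _, y in particles) / n)

# Stand-in for the joint skin/motion/depth likelihood: a peak at (50, 60).
def observe(p):
    return math.exp(-((p[0] - 50) ** 2 + (p[1] - 60) ** 2) / 200.0)

est = pf_track(observe)  # estimate converges near the likelihood peak
```

In the SSMD-PF, `observe` would instead be the product of the three cue likelihoods evaluated on the color and depth frames, which is what moves particles toward the hand even against cluttered, moving backgrounds.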
Peters, Suzanne M; Pinter, Ilona J; Pothuizen, Helen H J; de Heer, Raymond C; van der Harst, Johanneke E; Spruijt, Berry M
In the past, studies in behavioral neuroscience and drug development have relied on simple and quick readout parameters of animal behavior to assess treatment efficacy or to understand underlying brain mechanisms. The predominant use of classical behavioral tests has been repeatedly criticized during the last decades because of their poor reproducibility, poor translational value and the limited explanatory power in functional terms. We present a new method to monitor social behavior of rats using automated video tracking. The velocity of moving and the distance between two rats were plotted in frequency distributions. In addition, behavior was manually annotated and related to the automatically obtained parameters for a validated interpretation. Inter-individual distance in combination with velocity of movement provided specific behavioral classes, such as moving with high velocity when "in contact" or "in proximity". Human observations showed that these classes coincide with following (chasing) behavior. In addition, when animals are "in contact", but at low velocity, behaviors such as allogrooming and social investigation were observed. Also, low dose treatment with morphine and short isolation increased the time animals spent in contact or in proximity at high velocity. Current methods that involve the investigation of social rat behavior are mostly limited to short and relatively simple manual observations. A new and automated method for analyzing social behavior in a social interaction test is presented here and shows to be sensitive to drug treatment and housing conditions known to influence social behavior in rats. Copyright © 2016 Elsevier B.V. All rights reserved.
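The distance/velocity classification described above can be sketched as a simple per-frame mapping; the centimeter and velocity thresholds below are illustrative placeholders, not the study's calibrated values.

```python
def social_class(distance_cm, velocity_cm_s,
                 contact=5.0, proximity=15.0, fast=20.0):
    """Map inter-animal distance and movement velocity to a behavioral
    class label, following the distance x velocity scheme."""
    if distance_cm <= contact:
        zone = "contact"
    elif distance_cm <= proximity:
        zone = "proximity"
    else:
        zone = "apart"
    speed = "high" if velocity_cm_s >= fast else "low"
    return f"{zone}/{speed}"

# "contact/high" coincided with following (chasing) in the manual
# annotations; "contact/low" with allogrooming and social investigation.
```

Frequency distributions of these labels over a session then give the treatment-sensitive readout the authors report, e.g. more time in "contact/high" after low-dose morphine or short isolation.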
Magic Lantern and Honeywell FM and T worked together to develop lower-cost, visible light solid-state laser sources to use in laser projector products. Work included a new family of video displays that use lasers as light sources. The displays would project electronic images up to 15 meters across and provide better resolution and clarity than movie film, up to five times the resolution of the best available computer monitors, up to 20 times the resolution of television, and up to six times the resolution of HDTV displays. The products that could be developed as a result of this CRADA could benefit the economy in many ways, such as: (1) Direct economic impact in the local manufacture and marketing of the units. (2) Direct economic impact in exports and foreign distribution. (3) Influencing the development of other elements of display technology that take advantage of the signals that these elements allow. (4) Increased productivity for engineers, FAA controllers, medical practitioners, and military operatives.
Hench, David L.
The H.264 video compression standard, also known as MPEG-4 Part 10 or Advanced Video Coding (AVC), allows new flexibility in the use of video in the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military versions are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters on the fly, without stopping video transmission, trading off video bandwidth (and video quality) along four dimensions: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5 to 1 range; and 4) Group of Pictures (GOP) length, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which allows the bitrate to be limited at any point in the communication chain by discarding preselected packets.
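A first-order illustration of how the spatial, temporal, and quality dimensions combine to scale bandwidth, assuming bitrate is roughly proportional to pixel rate times a quality factor (a simplification for intuition, not an H.264 rate-control formula):

```python
def relative_bitrate(width, height, fps, quality=1.0,
                     ref=(720, 480, 30.0, 1.0)):
    """Approximate bandwidth of a configuration relative to a reference,
    under the assumption bitrate ~ width * height * fps * quality."""
    rw, rh, rf, rq = ref
    return (width * height * fps * quality) / (rw * rh * rf * rq)

full = relative_bitrate(720, 480, 30)   # 1.0: the reference setting
small = relative_bitrate(160, 180, 5)   # ~1.4% of the reference rate
```

This is why switching both spatial and temporal settings on the fly gives such a wide usable bandwidth range from a single encoder.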
Monini, Simonetta; Marinozzi, Franco; Atturo, Francesca; Bini, Fabiano; Marchelletta, Silvia; Barbara, Maurizio
To propose a new objective video-recording procedure to assess and monitor over time the severity of facial nerve palsy. No objective methods for facial palsy (FP) assessment are universally accepted. The face of subjects presenting with different degrees of facial nerve deficit, as measured by the House-Brackmann (HB) grading system, was videotaped after positioning, at specific points, 10 gray circular markers made of a retroreflective material. Video-recording included the resting position and six ordered facial movements. Editing and data elaboration were performed using software instructed to assess marker distances. A score was then extracted from the differences in marker distances between the two sides. The higher the FP degree, the higher the score registered during each movement. The statistical significance between the different FP degrees differed across movements: it was uniform when closing the eyes gently, whereas when wrinkling the nose there was no difference between the HB grade III and IV groups, and when smiling no difference was evidenced between the HB grade IV and V groups. The global range index, which represents the overall degree of FP, was between 6.2 and 7.9 in the normal subjects (HB grade I); between 10.6 and 18.91 in HB grade II; between 22.19 and 33.06 in HB grade III; between 38.61 and 49.75 in HB grade IV; and between 50.97 and 66.88 in HB grade V. The proposed objective methodology could provide numerical data that correspond to the different degrees of FP, as assessed by the subjective HB grading system. These data can in addition be used singularly to score selected areas of the paralyzed face when recovery occurs with different timing in the different face regions.
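A simplified reading of the side-to-side scoring idea (not the paper's exact formula): sum the absolute differences between corresponding left- and right-side marker distances for one movement, so a more asymmetric face scores higher. The marker distances below are hypothetical.

```python
def asymmetry_score(left_distances, right_distances):
    """Sum of absolute left-right differences in marker distances for
    one facial movement; larger scores indicate a greater deficit."""
    return sum(abs(l - r) for l, r in zip(left_distances, right_distances))

# Hypothetical marker-pair distances (mm) during a smile.
healthy = asymmetry_score([42.0, 31.5, 27.0], [41.6, 31.9, 27.3])
palsy = asymmetry_score([42.0, 31.5, 27.0], [49.5, 36.0, 31.0])
```

Scoring each movement separately is what allows region-by-region monitoring during recovery, as the abstract notes.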
This chapter focuses on methodological issues that arise in using (digital) video for research communication, not least online. Video has long been used in research for data collection and for communicating research. With digitization and the internet, however, new opportunities and challenges have emerged for conveying and distributing research results to different target audiences via video. At the same time, classic methodological issues, such as the researcher's positioning in relation to what is being studied, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion around different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented with tools for planning …
Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy
We describe the design of a video streaming system that uses adaptation to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the resolution sufficient under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming or MPEG-DASH. The client rate selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods that do not exploit viewing conditions.
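The core of such a system is the mapping from viewing distance to a "sufficient" resolution. A sketch under stated assumptions (an acuity limit of 30 cycles/degree and 2 pixels per cycle; the paper's visual model is more elaborate):

```python
import math

def sufficient_width_px(screen_width_m, distance_m, cycles_per_degree=30.0):
    """Horizontal resolution beyond which extra pixels are invisible
    at the given viewing distance, assuming a fixed acuity limit and
    the Nyquist minimum of 2 pixels per cycle."""
    half_angle = math.degrees(math.atan(screen_width_m / (2.0 * distance_m)))
    return int(2.0 * half_angle * cycles_per_degree * 2.0)

near = sufficient_width_px(0.11, 0.25)   # phone held close: ~1500 px wide
far = sufficient_width_px(0.11, 1.00)    # same phone at 1 m: ~380 px wide
```

The client rate-selection logic would then cap the chosen representation at this width, which is where the bitrate savings at larger viewing distances come from.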
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real...... World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers...
Carnaz, Letícia; Moriguchi, Cristiane S; de Oliveira, Ana Beatriz; Santiago, Paulo R P; Caurin, Glauco A P; Hansson, Gert-Åke; Coury, Helenice J C Gil
This study compared neck range-of-movement recording using three different methods: goniometers (EGM), inclinometers (INC) and a three-dimensional video analysis system (IMG), in simultaneous and synchronized data collection. Twelve females performed neck flexion-extension, lateral flexion, rotation and circumduction. The differences between EGM, INC and IMG were calculated sample by sample. For the flexion-extension movement, IMG underestimated the amplitude by 13%; moreover, EGM showed a crosstalk of about 20% for the lateral flexion and rotation axes. In the lateral flexion movement, all systems showed similar amplitudes and the inter-system differences were moderate (4-7%). For the rotation movement, EGM showed a high crosstalk (13%) for the flexion-extension axis. During the circumduction movement, IMG underestimated the amplitude of flexion-extension movements by about 11%, and the inter-system differences were high (about 17%), except for INC-IMG regarding lateral flexion (7%) and EGM-INC regarding flexion-extension (10%). For application in the workplace, INC presents good results compared to IMG and EGM, though INC cannot record rotation. EGM should be improved in order to reduce its crosstalk errors and allow recording of the full neck range of movement. Due to non-optimal positioning of the cameras for recording flexion-extension, IMG underestimated the amplitude of these movements. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
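A sketch of one way to express sample-by-sample disagreement between two synchronized recording systems as a percentage of movement amplitude; the traces below are synthetic (one system reading 13% low, as in the flexion-extension result), not the study's data.

```python
def mean_abs_diff_percent(system_a, system_b):
    """Mean absolute sample-by-sample difference between two systems,
    expressed as a percentage of system A's movement amplitude."""
    diffs = [abs(a - b) for a, b in zip(system_a, system_b)]
    amplitude = max(system_a) - min(system_a)
    return 100.0 * (sum(diffs) / len(diffs)) / amplitude

# Synthetic flexion-extension traces (degrees); "img" reads 13% low.
egm = [0, 20, 40, 60, 40, 20, 0]
img = [v * 0.87 for v in egm]
pct = mean_abs_diff_percent(egm, img)
```

Note a uniform 13% amplitude underestimation yields a smaller mean-difference percentage than 13%, since the error scales with the instantaneous angle; amplitude error and mean sample-wise error are distinct metrics.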
Full Text Available Various studies have discussed the pedagogical potential of video game play in the classroom, but resistance to such texts remains high. The study presented here discusses the case of one young boy who, having failed to learn to read in the public school system, was able to learn in a private Sudbury-model school where video games were not only allowed but considered important learning tools. Findings suggest that the incorporation of such new texts in today's public schools has the potential to motivate and enhance the learning of children.
Emerging video applications are being developed where multiple views of a scene are captured. Two central issues in the deployment of future multiview video (MVV) systems are compression efficiency and interactive video experience, which makes it necessary to develop advanced technologies on multiview video coding (MVC) and interactive multiview video streaming (IMVS). The former aims at efficient compression of all MVV data in a ratedistortion (RD) optimal manner by exploiting both temporal ...
Cihak, David F.; Smith, Catherine C.; Cornett, Ashlee; Coleman, Mari Beth
The use of video modeling (VM) procedures in conjunction with the picture exchange communication system (PECS) to increase independent communicative initiations in preschool-age students was evaluated in this study. The four participants were 3-year-old children with limited communication skills prior to the intervention. Two of the students had…
Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R
The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research. Crown Copyright 2009. Published by Elsevier B.V. All rights reserved.
Takeda, Naohito; Takeuchi, Isao; Haruna, Mitsumasa
In order to develop an e-learning system that promotes self-learning, lectures and basic operations in the laboratory practice of chemistry were recorded and edited on DVD media, consisting of 8 streaming videos as learning materials. Twenty-six students wanted to watch the DVD and answered the following question after they had watched it: "Do you think the video would serve to encourage you to study independently in the laboratory practice?" Almost all students (95%) approved of its usefulness, and more than 60% of them watched the videos repeatedly in order to acquire deeper knowledge and skill in the experimental operations. More than 60% answered that the demonstration-experiment should be continued in the laboratory practice, in spite of the distribution of the DVD media.
Jensen, Karsten; Juhl, Jens
There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In the last decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.
Gonzalez, J.; Pomares, H.; Damas, M.; Garcia-Sanchez,P.; Rodriguez-Alvarez, M.; Palomares, J. M.
As embedded systems are becoming prevalent in everyday life, many universities are incorporating embedded systems-related courses in their undergraduate curricula. However, it is not easy to motivate students in such courses since they conceive of embedded systems as bizarre computing elements, different from the personal computers with which they…
Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.
Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.
Mahmood Rajpoot, Qasim; Jensen, Christian D.
Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand the use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a need to balance the usage of video surveillance against its negative impact on privacy. This chapter aims to highlight the privacy issues in video surveillance and provides a model to help identify the privacy requirements in a video surveillance system. The authors make a step in the direction of investigating the existing legal infrastructure for ensuring privacy in video surveillance and suggest guidelines to help those who want to deploy video surveillance while least compromising the privacy of people and complying with the legal infrastructure.
Arndt, Timothy; Guercio, Angela; Maresca, Paolo
Multimedia database systems are becoming increasingly important as organizations accumulate more multimedia data. There are few solutions that allow this information to be stored and managed efficiently. Relational systems provide features that organizations rely on for their alphanumeric data. Unfortunately, these systems lack facilities necessary for handling multimedia data - things like media integration, composition and presentation, multimedia interfaces and interactivity, imprecise query support, and multimedia indexing. One solution suggested for the storage of multimedia data is the use of an object-oriented database management system as a layer on top of the relational system. The layer adds the required multimedia functionality to the capabilities provided by the relational system. A prototype solution implemented in Java uses the facilities offered by JDBC to provide connections to a large number of databases. The Java Media Framework is used to present the video and audio data. Among the facilities provided are image/video/audio display/playback and an extension of SQL to include multimedia operators and functions.
Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.
Vision is only a part of a larger system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. This mechanism provides reliable recognition if the target is occluded or cannot be recognized. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Logic of visual scenes can be captured in Network-Symbolic models and used for disambiguation of visual information. Network-Symbolic Transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps build consistent, unambiguous models. Such Image/Video Understanding Systems will be able to reliably recognize targets in real-world conditions.
Madachy, Raymond J.
Naval Postgraduate School Graduate School of Engineering & Applied Sciences, Total Ownership Cost Modeling presented by Raymond J. Madachy, Associate Professor of Systems Engineering at the Naval Postgraduate School. Total Ownership Cost (TOC) is the sum cost of system acquisition, development, and operations including direct and indirect costs. In the DoD, cost modeling is needed to enable tradespace analysis of affordability with other system ilities. Parametric cost models will be overv...
Video monitoring of visible atmospheric emissions: from a manual device to a new fully automatic detection and classification device; Video surveillance des rejets atmospheriques d'un site siderurgique: d'un systeme manuel a la detection automatique
Bardet, I.; Ryckelynck, F.; Desmonts, T. [Sollac, 59 - Dunkerque (France)
The context of strong local sensitivity to dust emissions from an integrated steel plant justifies the monitoring of emissions of abnormally coloured smoke from this plant. In a first step, the watch was done 'visually' by screening and counting the puff emissions through a set of seven cameras and video recorders. The development of a new device performing automatic picture analysis made it possible to automate the inspection. The new system detects and counts the incidents and sends an alarm to the process operator. After further testing, this approach to automatic detection can be extended to other uses in the environmental field. (authors)
Video streaming over the Internet has gained significant popularity in recent years, and academia and industry have devoted great research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard to provide more functionality to video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality between different videos from different sources. The obtained results show that our proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.
Background and objective: In recent years, the application of the da Vinci robot system in the surgical treatment of intrathoracic mediastinal diseases has matured. The aim of this study is to summarize the clinical data about mediastinal lesions at the General Hospital of Shenyang Military Region over the past 4 years, and to analyze the treatment effect and promising applications of the da Vinci robot system in the surgical treatment of mediastinal lesions. Methods: 203 cases of mediastinal lesions were collected at the General Hospital of Shenyang Military Region between 2010 and 2013. These patients were divided into two groups, da Vinci and video-assisted thoracoscopic surgery (VATS), according to the selected treatment. The operative time, intraoperative blood loss, postoperative drainage amount within three days after surgery, the period of bearing drainage tubes, hospital stay and hospitalization expense were then compared. Results: All patients were successfully operated on, postoperative recovery was good, and there were no perioperative deaths. The operative time was 82 (20-320) min in the robot group and 89 (35-360) min in the thoracoscopic group (P>0.05). The intraoperative blood loss was 10 (1-100) mL in the robot group and 50 (3-1,500) mL in the thoracoscopic group. The postoperative drainage amount within three days after surgery was 215 (0-2,220) mL in the robot group and 350 (50-1,810) mL in the thoracoscopic group. The period of bearing drainage tubes after surgery was 3 (0-10) d in the robot group and 5 (1-18) d in the thoracoscopic group. The hospital stay was 7 (2-15) d in the robot group and 9 (2-50) d in the thoracoscopic group. The hospitalization expense was (18,983.6±4,461.2) RMB in the robot group and (9,351.9±2,076.3) RMB in the thoracoscopic group (all P<0.001). Conclusion: The da Vinci robot system is safe and efficient in the treatment of mediastinal lesions compared with video
van Houten, Ynze; Schuurman, Jan Gerrit; Verhagen, Pleunes Willem; Enser, Peter; Kompatsiaris, Yiannis; O’Connor, Noel E.; Smeaton, Alan F.; Smeulders, Arnold W.M.
With information systems, the real design problem is not increased access to information, but greater efficiency in finding useful information. In our approach to video content browsing, we try to match the browsing environment with human information processing structures by applying ideas from
Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih
Nowadays, video plays a significant role in education in terms of its integration into traditional classes, serving as the principal delivery system of information, particularly in online courses, as well as serving as a foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…
De Laat, PB
According to David Teece, only strong and integrated firms can successfully innovate in a systemic fashion. Looser coalitions consisting of joint ventures, alliances, or virtual partners will not be able to create a systemic innovation, let alone to set standards for it, or to control its further
Maryland State Dept. of Education, Baltimore. School Facilities Branch.
Telecommunications infrastructure has the dual challenges of maintaining quality while accommodating change, issues that have long been met through a series of implementation standards. This document is designed to ensure that telecommunications systems within the Maryland public school system are also capable of meeting both challenges and…
... by computer simulations, with/without supplementary gyro and GPS. How various system parameters impact the achievable precision of the panoramic system in 3-D terrain feature localization and UAV motion estimation is determined for the A=0.5-2 KM...
Structural health monitoring (SHM) has become a viable tool to provide owners of structures and mechanical systems with quantitative and objective data for maintenance and repair. Traditionally, discrete contact sensors such as strain gages or accelerometers have been used for SHM. However, distributed remote sensors could be advantageous since they don't require cabling and can cover an area rather than a limited number of discrete points. Along this line, we propose a novel monitoring methodology based on video analysis. By employing commercially available digital cameras combined with efficient signal processing methods, we can measure and compute the fundamental frequency of vibration of structural systems. The basic concept is that small changes in the intensity value of a monitored pixel with fixed coordinates, caused by the vibration of structures, can be captured by employing techniques such as the Fast Fourier Transform (FFT). In this paper we introduce the basic concept and mathematical theory of the proposed so-called virtual visual sensor (VVS), present a set of initial laboratory experiments to demonstrate the accuracy of this approach, and provide a practical monitoring example of an in-service bridge. Finally, we discuss further work to improve the current methodology.
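The pixel-intensity idea behind the virtual visual sensor lends itself to a compact sketch. The following is a minimal illustration, not the authors' implementation: it treats the intensity time series of one monitored pixel as a signal, removes the DC component, and reads the fundamental vibration frequency off the FFT peak (the function name, synthetic signal and frame rate are all invented for the example).

```python
import numpy as np

def dominant_frequency(pixel_intensity, fps):
    """Estimate the fundamental vibration frequency from the intensity
    time series of a single monitored pixel (hypothetical helper)."""
    samples = np.asarray(pixel_intensity, dtype=float)
    samples = samples - samples.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]           # spectral peak = fundamental

# Synthetic example: a 3 Hz vibration observed by a 30 fps camera.
t = np.arange(0, 10, 1 / 30)
signal = 128 + 20 * np.sin(2 * np.pi * 3 * t)
print(dominant_frequency(signal, fps=30))       # close to 3.0 Hz
```

With a 30 fps camera, only frequencies below the 15 Hz Nyquist limit can be resolved this way, which still covers the fundamental modes of many civil structures.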
Michael B. McCamy
Our eyes are in continuous motion. Even when we attempt to fix our gaze, we produce so-called "fixational eye movements", which include microsaccades, drift, and ocular microtremor (OMT). Microsaccades, the largest and fastest type of fixational eye movement, shift the retinal image from several dozen to several hundred photoreceptors and have equivalent physical characteristics to saccades, only on a smaller scale (Martinez-Conde, Otero-Millan & Macknik, 2013). OMT occurs simultaneously with drift and is the smallest of the fixational eye movements (∼1 photoreceptor width, >0.5 arcmin), with dominant frequencies ranging from 70 Hz to 103 Hz (Martinez-Conde, Macknik & Hubel, 2004). Due to OMT's small amplitude and high frequency, the most accurate and stringent way to record it is the piezoelectric transduction method. Thus, OMT studies are far rarer than those focusing on microsaccades or drift. Here we conducted simultaneous recordings of OMT and microsaccades with a piezoelectric device and a commercial infrared video tracking system. We set out to determine whether OMT could help to restore perceptually faded targets during attempted fixation, and we also wondered whether the piezoelectric sensor could affect the characteristics of microsaccades. Our results showed that microsaccades, but not OMT, counteracted perceptual fading. We moreover found that the piezoelectric sensor affected microsaccades in a complex way, and that the oculomotor system adjusted to the stress brought on by the sensor by adjusting the magnitudes of microsaccades.
This paper describes a master-slave visual surveillance system that uses stationary-dynamic camera assemblies to achieve wide field of view and selective focus of interest. In this system, the fish-eye panoramic camera is capable of monitoring a large area, and the PTZ dome camera has high mobility and zoom ability. In order to achieve the precise interaction, preprocessing spatial calibration between these two cameras is required. This paper introduces a novel calibration approach to automatically calculate a transformation matrix model between two coordinate systems by matching feature points. In addition, a distortion correction method based on Midpoint Circle Algorithm is proposed to handle obvious horizontal distortion in the captured panoramic image. Experimental results using realistic scenes have demonstrated the efficiency and applicability of the system with real-time surveillance.
implementation. The system currently has a bug in that there is no synchronisation between the input frames and the tracked objects reported for each frame (due to a bug in the third-party MPEG decoder). It was therefore necessary to synchronise the reporting with the input frames by hand, and this ... algorithms for our VMTI system.
This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real... Topics include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition.
The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has a unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and binary descriptor for efficient feature extraction and representation, and present a spatial and temporal coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitching image for aerial video stitching tasks.
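The binary-descriptor matching step that makes such a pipeline fast can be sketched with plain Hamming-distance matching: descriptors are rows of packed bytes, and XOR plus popcount gives the Hamming distance. This is a generic illustration of brute-force binary matching, not the paper's coherent filter; the function name, byte layout and distance budget are assumptions.

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=30):
    """Brute-force match binary descriptors (rows of uint8 bytes) by
    Hamming distance; a small distance budget rejects weak matches."""
    matches = []
    for i, d in enumerate(desc_a):
        # XOR then popcount gives the Hamming distance to every candidate
        dists = np.unpackbits(desc_b ^ d, axis=1).sum(axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j, int(dists[j])))
    return matches

a = np.array([[0b10101010, 0b11110000]], dtype=np.uint8)
b = np.array([[0b10101010, 0b11110001],   # 1 bit away: a good match
              [0b01010101, 0b00001111]], dtype=np.uint8)
print(hamming_match(a, b))  # [(0, 0, 1)]
```

In practice the motion-coherence filter described in the abstract would further prune these candidate matches before estimating the stitching transform.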
Helen Gail Prosser
Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high capacity broadband network that connects schools, hospitals, libraries and government offices thr...
Shooting free throws plays an important role in basketball. The major problem in performing a correct free throw seems to be inappropriate training. Training is performed offline and it is often not that persistent. The aim of this paper is to consciously modify and control the free throw using biofeedback. Elbow and shoulder dynamics are calculated by an image processing technique equipped with a video image acquisition system. The proposed setup, named the learning control system, is able to quantify and provide feedback on the above parameters in real time as audio signals. Therefore, it enables correct learning and conscious control of shooting. Experimental results showed improvements in the free throw shooting style, including the shot pocket and locked position. The mean values of the elbow and shoulder angles were controlled at approximately 89° and 26° for the shot pocket, and these angles were tuned to approximately 180° and 47°, respectively, for the locked position (close to the desired pattern of the free throw based on valid FIBA references). Not only were the mean values enhanced, but the standard deviations of these angles also decreased meaningfully, which shows shooting style convergence and uniformity. Also, in training conditions, the average percentage of successful free throws increased from about 64% to 87% after using this setup, and in competition conditions the average percentage of successful free throws improved by about 20%, although using the learning control system may not be the only reason for these outcomes. The proposed system is easy to use, inexpensive, portable and applicable in real time.
Lee, Hyun Jeong; Yea, Ji Woon; Oh, Se An
Respiratory-gated radiation therapy (RGRT) has been used to minimize the dose to normal tissue in lung-cancer radiotherapy. The present research aims to improve the regularity of respiration in RGRT by using a video-coached respiration guiding system. In the study, 16 patients with lung cancer were evaluated. The respiration signals of the patients were measured by using a real-time position management (RPM) respiratory gating system (Varian, USA), and the patients were trained using the video-coaching respiration guiding system. The patients performed free breathing and guided breathing, and the respiratory cycles were acquired for ~5 min. Then, Microsoft Excel 2010 software was used to calculate the mean and the standard deviation for each phase. The standard deviation was computed in order to analyze the improvement in the respiratory regularity with respect to the period and the displacement. The standard deviation of the guided breathing decreased to 48.8% in the inhale peak and 24.2% in the exhale peak compared with the values for the free breathing of patient 6. The standard deviation of the respiratory cycle was found to decrease when using the respiratory guiding system. The respiratory regularity was significantly improved when using the video-coaching respiration guiding system. Therefore, the system is useful for improving the accuracy and the efficiency of RGRT.
Roth, Susan King
In winter of 1993, a design research project was conducted in the Department of Interior Design at Ohio State University by interdisciplinary teams of graduate students from Industrial Design, Industrial Systems and Engineering, Marketing, and Communication. It was, in effect, a course which aimed to apply knowledge from the students' diverse…
Vision is a part of an information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. It is hard to split the entire system apart, and vision mechanisms cannot be completely understood separately from informational processes related to knowledge and intelligence. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Vision is a component of situation awareness, motion and planning systems. Foveal vision provides semantic analysis, recognizing objects in the scene. Peripheral vision guides the fovea to salient objects and provides scene context. Biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, converts visual information into relational Network-Symbolic structures, avoiding precise artificial computations of 3-D models. Network-Symbolic transformations derive more abstract structures that allow for invariant recognition of an object as an exemplar of a class and for reliable identification even if the object is occluded. Systems with such smart vision will be able to navigate in a real environment and understand real-world situations.
Vision evolved as a sensory system for reaching, grasping and other motion activities. In advanced creatures, it has become a vital component of situation awareness, navigation and planning systems. Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. It is hard to split such a system apart. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for natural processing of visual information. It converts visual information into relational Network-Symbolic models, avoiding artificial precise computations of 3-dimensional models. Logic of visual scenes can be captured in such models and used for disambiguation of visual information. Network-Symbolic transformations derive abstract structures, which allow for invariant recognition of an object as an exemplar of a class. Active vision helps create unambiguous network-symbolic models. This approach is consistent with NIST RCS. A UGV equipped with such smart vision will be able to plan a path and navigate in a real environment, perceive and understand complex real-world situations and act accordingly.
The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold. First, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video. Second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann
To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; age average 30 years, range: 9-84 years), 206 CBCT examinations were performed, which were video-recorded during examination. An AG was, at the same time, attached to the patient head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by AG, according to movement complexity (uniplanar vs multiplanar, as defined by AG) and patient age (≤15, 16-30 and ≥31 years). According to AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by AG. Overall, VO did not detect 71.9% of the movements registered by AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO in comparison with uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who move was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements. The prevalence of patients who move
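The AG-based registration described above reduces, at its core, to comparing a 3-D displacement magnitude against a distance threshold. The following is a minimal sketch under assumed units (mm) with synthetic samples, not the authors' registration software; the function name and data layout are invented for the example.

```python
import numpy as np

def detect_movements(xyz, threshold_mm=0.5):
    """Flag samples where the 3-D head displacement from the reference
    position meets or exceeds a distance threshold in mm (hypothetical
    helper; rows are x, y, z displacements per sample)."""
    disp = np.linalg.norm(np.asarray(xyz, dtype=float), axis=1)
    return disp >= threshold_mm

# Three samples: still, a 0.6 mm uniplanar shift, a multiplanar shift.
samples = [[0.1, 0.0, 0.0], [0.6, 0.0, 0.0], [0.5, 0.5, 0.5]]
print(detect_movements(samples, threshold_mm=0.5).tolist())
# [False, True, True]
```

Raising `threshold_mm` to 4 mimics the abstract's coarsest setting, at which far fewer samples would be flagged as movement.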
This paper presents the transmission of a Digital Video Broadcasting system with streaming video at a resolution of 640x480 at different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. Key-frame selection algorithms are flexible to changes in a video, but with these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the original video, without significant loss of content between the original and received video, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. The best video transmission was also investigated using SEDIM (Sequential Distortion Minimization) and without SEDIM. The experimental results showed that the PSNR (Peak Signal to Noise Ratio) average of video transmission using SEDIM increased from 19.855 dB to 48.386 dB, and the SSIM (Structural Similarity) average increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
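The PSNR figure used to compare original and received video is computed from the mean squared error between frames. A sketch of the standard formula follows; the frames here are synthetic, and only the PSNR definition itself is taken as given.

```python
import numpy as np

def psnr(original, received, peak=255.0):
    """Peak Signal-to-Noise Ratio between two frames (8-bit range assumed)."""
    err = np.asarray(original, dtype=float) - np.asarray(received, dtype=float)
    mse = np.mean(err ** 2)
    if mse == 0:
        return float('inf')                     # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

frame = np.full((480, 640), 100.0)              # synthetic 640x480 frame
noisy = frame + 10.0                            # uniform error of 10 levels
print(round(psnr(frame, noisy), 2))             # MSE = 100 -> 28.13 dB
```

An increase of the kind reported in the abstract (roughly 20 dB to 48 dB) corresponds to the mean squared error shrinking by several orders of magnitude.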
Wax, David B; Hill, Bryan; Levin, Matthew A
Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
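The detection-line idea can be illustrated with a toy counter: a running-average background update yields foreground masks, and a rising edge of foreground pixels on a virtual detection line is counted as one vehicle. This is a hedged sketch of the general technique, not the paper's algorithm; all names, the update rule and the thresholds are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update (one common adaptive scheme)."""
    return (1 - alpha) * bg + alpha * frame

def count_line_crossings(foreground_masks, line_row, min_width=3):
    """Count rising edges where enough foreground pixels sit on a
    virtual detection line - a simple proxy for a vehicle crossing."""
    count = 0
    active = False
    for fg in foreground_masks:
        hit = fg[line_row].sum() >= min_width
        if hit and not active:        # rising edge = new vehicle enters
            count += 1
        active = hit
    return count

# Synthetic foreground masks: a vehicle occupies the line in frames 1-2.
empty = np.zeros((5, 8), dtype=int)
vehicle = empty.copy(); vehicle[2, 2:6] = 1
print(count_line_crossings([empty, vehicle, vehicle, empty], line_row=2))  # 1
```

Counting rising edges rather than occupied frames prevents a slow vehicle sitting on the line from being counted repeatedly.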
We have developed an automated physiological data-organizing and information-summary system (Critical Care Air
Frisoli, M; Edelhoff, J M; Gersdorff, N; Nicolet, J; Braidot, A; Engelke, W
This study provides a direct comparison between two registration systems used in quantifying mandibular opening movements: two-dimensional videography and electronic axiography, the latter used as a reference. A total of 32 volunteers (age: 27.2 ± 6.8 years; gender: 17 F, 15 M) participated in the study and repeated a characteristic movement, the frontal Posselt, used in the clinical evaluation of the temporomandibular joint. Frontal Posselt diagrams were reconstructed from the data gathered with both systems, which yielded acceptably similar results. Three commonly assessed parameters were obtained from each diagram and compared: maximum opening, right laterotrusion and left laterotrusion. Both descriptive statistics and the ANOVA test suggested that there was no significant difference between the estimated maximum opening parameter and the reference system (p = 0.217, 95% confidence). Laterotrusion values, on the other hand, appear to be overestimated by the videography system and to show greater variability. Two-dimensional videography appears to be a suitable tool, with resolution adequate for tracing mandibular movements, and opening values in particular, for screening purposes, long-term observation, and as a quick check for dysfunction as far as frontal plane trajectories are concerned. The reliability and acceptable quality of the 2D videography data acquired in this work show that it has clear advantages for wide application in the dental office, given its simplicity and low cost for maximum opening measurement and the usefulness of this parameter in the detection of temporomandibular disorders.
Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.
Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
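The recency/hysteresis idea, that a viewer's recent dissatisfaction colors the overall score, can be illustrated with a toy model. Everything below is a hypothetical sketch: `predict_qoe`, its constants, and the linear recovery dynamics are invented for illustration and are not the paper's fitted model.

```python
def predict_qoe(stall_starts, stall_durations, video_len_s,
                base=100.0, impact=15.0, recovery_rate=2.0):
    """Toy continuous-time QoE predictor with a recency (hysteresis)
    effect: each stall drops instantaneous QoE in proportion to its
    length, and QoE recovers only gradually afterwards, so recent
    stalls weigh more. All constants are illustrative, not fitted."""
    qoe, t_prev = base, 0.0
    for start, dur in sorted(zip(stall_starts, stall_durations)):
        qoe = min(base, qoe + recovery_rate * (start - t_prev))  # recovery
        qoe -= impact * dur                                      # stall hit
        t_prev = start + dur
    return min(base, qoe + recovery_rate * (video_len_s - t_prev))

# The same 2 s stall hurts the final score more when it happens late.
early = predict_qoe([5.0], [2.0], 120.0)   # viewer has time to recover
late = predict_qoe([110.0], [2.0], 120.0)  # recency effect dominates
print(early > late)  # True
```

Even this crude model reproduces the qualitative finding that stall position, not just count and length, shapes the final QoE score.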
Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok
Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, each member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results show that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
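The buffer-driven download control described above (stop when sufficiently buffered, request more otherwise) amounts to a hysteresis controller. A minimal sketch, with illustrative watermark thresholds that are not taken from the paper:

```python
# Toy model of buffer-driven rate control: a member device pauses
# downloading when its playback buffer holds enough video and requests
# more data when it runs low. Thresholds are illustrative assumptions.
HIGH_WATER = 10.0   # seconds buffered: stop downloading above this
LOW_WATER = 3.0     # seconds buffered: resume downloading below this

def control_step(buffered_s, downloading):
    """Decide whether the device should keep downloading."""
    if buffered_s >= HIGH_WATER:
        return False        # sufficiently buffered: stop the download
    if buffered_s <= LOW_WATER:
        return True         # running low: request additional video data
    return downloading      # in between: keep current state (hysteresis)

# Simulate playback consuming 1 s of video per tick, downloads adding 2 s.
buffered, downloading, trace = 5.0, True, []
for _ in range(20):
    if downloading:
        buffered += 2.0
    buffered = max(0.0, buffered - 1.0)   # playback consumes the buffer
    downloading = control_step(buffered, downloading)
    trace.append(downloading)
print(trace.count(True) > 0 and trace.count(False) > 0)  # True: it toggles
```

The two-threshold design is the point: it stops the device from flapping between download and idle on every tick, which is exactly the contention the scheme tries to avoid.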
This paper introduces a MAC-layer active dropping scheme to achieve effective resource utilization, which can satisfy the application-layer delay for real-time video streaming in time division multiple access based 4G broadband wireless access networks. When a video frame is not likely to be reconstructed within the application-layer delay bound at a receiver for the minimum decoding requirement, the MAC-layer protocol data units of such video frame will be proactively dropped before the transmission. An analytical model is developed to evaluate how confident a video frame can be delivered within its application-layer delay bound by jointly considering the effects of time-varying wireless channel, minimum decoding requirement of each video frame, data retransmission, and playback buffer. Extensive simulations with video traces are conducted to prove the effectiveness of the proposed scheme. When compared to conventional cross-layer schemes using prioritized-transmission/retransmission, the proposed scheme is practically implementable for more effective resource utilization, avoiding delay propagation, and achieving better video qualities under certain conditions.
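The core drop decision can be illustrated with a deterministic simplification of the abstract's analytical model (which additionally accounts for time-varying channel conditions and retransmissions). The function name and the numbers below are assumptions for illustration only:

```python
def should_drop(frame_bits, link_rate_bps, queued_bits, deadline_s):
    """Proactive MAC-layer drop decision (a simplified, deterministic
    analogue of the abstract's probabilistic model): drop a video
    frame's PDUs if, behind the data already queued ahead of it, the
    frame cannot finish transmission within its application-layer
    delay bound."""
    finish_time_s = (queued_bits + frame_bits) / link_rate_bps
    return finish_time_s > deadline_s

# A 40 kbit frame behind 100 kbit of queue on a 1 Mbit/s link finishes
# at 0.14 s; with a 0.1 s delay bound its PDUs are dropped pre-send.
print(should_drop(40_000, 1_000_000, 100_000, 0.1))   # True
print(should_drop(40_000, 1_000_000, 10_000, 0.1))    # False
```

Dropping before transmission, rather than after a missed deadline, is what frees the air time for frames that can still arrive usefully, which is the resource-utilization argument the abstract makes.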
Boozer, G. A.; Mckibbin, D. D.; Haas, M. R.; Erickson, E. F.
This simulator was created so that C-141 Kuiper Airborne Observatory investigators could test their Airborne Data Acquisition and Management System (ADAMS) software on a system that is generally more accessible than the ADAMS on the plane. An investigator can currently test most of their data acquisition program using the data computer simulator in the Cave. (The Cave refers to the ground-based computer facilities for the KAO and the associated support personnel.) The main Cave computer is interfaced to the data computer simulator in order to simulate the data-Exec computer communications. Until now, however, there has been no way to test the data computer interface to the tracker. The simulator described here simulates both the KAO Exec and tracker computers with software that runs on the same Hewlett-Packard (HP) computer as the investigator's data acquisition program. A simulator control box is hardwired to the computer to provide monitoring of tracker functions, to provide an operator panel similar to the real tracker, and to simulate the 180 deg phase shifting of the chopper square-wave reference with beam switching. If run in the Cave, one can use the Cave's Exec simulator together with this tracker simulator.
Micro-expressions play an essential part in understanding non-verbal communication and deceit detection. They are involuntary, brief facial movements that are shown when a person is trying to conceal something. Automatic analysis of micro-expressions is challenging due to their low amplitude and short duration (they occur as fast as 1/15 to 1/25 of a second). We propose a complete micro-expression analysis system consisting of a high-speed image acquisition setup and a software framework which can detect the frames in which micro-expressions occur as well as determine the type of the emerged expression. The detection and classification methods use fast and simple motion descriptors based on absolute image differences. The recognition module only involves the computation of several 2D Gaussian probabilities. The software framework was tested on two publicly available high-speed micro-expression databases, and the whole system was used to acquire new data. The experiments we performed show that our solution outperforms state-of-the-art works which use more complex and computationally intensive descriptors.
Schnell, Norbert; Saiz, Victor; Barkati, Karim; Goldszmidt, Samuel
Position-specific services have been developed using Global Positioning System (GPS) technology. To determine position using cellular phones, devices have been developed in which a pedestrian navigation unit carries a GPS receiver. However, GPS cannot specify a position in a subterranean environment or indoors, which are beyond the reach of the transmitted signals. In addition, the position-specification precision of GPS, that is, its resolution, is on the order of several meters, which is deemed insufficient for pedestrians. In this study, we proposed and evaluated a technique for locating a user's 3D position by setting up markers in the navigation space and detecting them in the image from a cellular phone. By experiment, we verified the effectiveness and accuracy of the proposed method. Additionally, we improved the positional precision by using numerous markers for distance measurement.
Russo, Paolo; Gualdi-Russo, Emanuela; Pellegrinelli, Alberto; Balboni, Juri; Furini, Alessio
Using an interdisciplinary approach, the authors demonstrate the possibility of obtaining reliable anthropometric data on a subject by means of a new video surveillance system. In general, the use of current video surveillance systems provides law enforcement with useful data to solve many crimes. Unfortunately, the quality of the images and the way in which they are taken often make it very difficult to judge the compatibility between suspect and perpetrator. In this paper, the authors present the results obtained with a low-cost photogrammetric video surveillance system based on a pair of common surveillance cameras synchronized with each other. The innovative aspect of the system is that it allows estimation with considerable accuracy not only of body height (error 0.1-3.1 cm, SD 1.8-4.5 cm) but also of other anthropometric characteristics of the subject, enabling better determination of the biological profile and greatly increasing the effectiveness of the compatibility judgment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Arriaga, Patrícia; Esteves, Francisco; Carneiro, Paula; Monteiro, Maria Benedicta
This study was conducted to analyze the short-term effects of violent electronic games, played with or without a virtual reality (VR) device, on the instigation of aggressive behavior. Physiological arousal (heart rate; HR), priming of aggressive thoughts, and state hostility were also measured to test their possible mediation of the relationship between playing the violent game (VG) and aggression. The participants--148 undergraduate students--were randomly assigned to four treatment conditions: two groups played a violent computer game (Unreal Tournament), and the other two a non-violent game (Motocross Madness), half with a VR device and the remaining participants on the computer screen. In order to assess the game effects, the following instruments were used: a BIOPAC System MP100 to measure HR, an Emotional Stroop task to analyze the priming of aggressive and fear thoughts, a self-report State Hostility Scale to measure hostility, and a competitive reaction-time task to assess aggressive behavior. The main results indicated that the violent computer game had effects on state hostility and aggression. Although no significant mediation effect could be detected, regression analyses showed an indirect effect of state hostility between playing a VG and aggression. Copyright 2008 Wiley-Liss, Inc.
Saavedra M., Antonio; Pezoa, Jorge E.; Zarkesh-Ha, Payman; Figueroa, Miguel
We propose a platform for robust face recognition in Infrared (IR) images using Compressive Sensing (CS). In line with CS theory, the classification problem is solved using a sparse representation framework, where test images are modeled by means of a linear combination of the training set. Because the training set constitutes an over-complete dictionary, we identify new images by finding their sparsest representation based on the training set, using standard l1-minimization algorithms. Unlike conventional face-recognition algorithms, feature extraction is performed using random projections with a precomputed binary matrix, as proposed in the CS literature. This random sampling reduces the effects of noise and occlusions such as facial hair, eyeglasses, and disguises, which are notoriously challenging in IR images. Thus, the performance of our framework is robust to these noise and occlusion factors, achieving an average accuracy of approximately 90% when the UCHThermalFace database is used for training and testing purposes. We implemented our framework on a high-performance embedded digital system, where the computation of the sparse representation of IR images was performed by dedicated hardware using a deeply pipelined architecture on a Field-Programmable Gate Array (FPGA).
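The sparse-representation pipeline (random binary projection for feature extraction, then classification by reconstruction residual) can be sketched as follows. This is a toy illustration, not the paper's implementation: least squares stands in for the l1-minimization solver, and the synthetic "images" replace the UCHThermalFace data.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_projection(n_features, n_measurements, rng):
    """Precomputed random binary (+/-1) sensing matrix for CS-style
    feature extraction, as described in the abstract."""
    return rng.choice([-1.0, 1.0], size=(n_measurements, n_features))

def classify(y, class_dicts):
    """Assign y to the class whose training atoms reconstruct it with
    the smallest residual. (Least squares is a stand-in here for the
    paper's l1-minimization, which needs a dedicated sparse solver.)"""
    residuals = []
    for D in class_dicts:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

# Toy data: two classes of 100-pixel "images", 5 training samples each.
n_pix, n_meas = 100, 20
phi = binary_projection(n_pix, n_meas, rng)
base0 = np.r_[np.full(50, 5.0), np.zeros(50)]   # class 0: bright top half
base1 = np.r_[np.zeros(50), np.full(50, 5.0)]   # class 1: bright bottom half
train0 = base0[:, None] + rng.normal(0, 0.1, (n_pix, 5))
train1 = base1[:, None] + rng.normal(0, 0.1, (n_pix, 5))
class_dicts = [phi @ train0, phi @ train1]       # project training sets
test_img = base0 + rng.normal(0, 0.1, n_pix)     # unseen class-0 sample
print(classify(phi @ test_img, class_dicts))     # classified as class 0
```

Note how the classifier never sees the raw 100-pixel images, only their 20 random measurements; this dimensionality reduction is where the robustness to localized occlusions is claimed to come from.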
Scheer, Krista S.; Siebrant, Sarah M.; Brown, Gregory A.; Shaw, Brandon S.; Shaw, Ina
Nintendo Wii, Sony Playstation Move, and Microsoft XBOX Kinect are home video gaming systems that involve player movement to control on-screen game play. Numerous investigations have demonstrated that playing Wii is moderate physical activity at best, but Move and Kinect have not been as thoroughly investigated. The purpose of this study was to compare heart rate, oxygen consumption, and ventilation while playing the games Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat. Heart rate, o...
OFarrell, Zachary L.
Aegis Video Player is the name of the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic video streaming capabilities. The program was then customized for use during launches. The VLC plug-in can be configured programmatically to display a single stream, but this project needed access to multiple streams. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.
Ivory, James D; Williams, Dmitri; Martins, Nicole; Consalvo, Mia
Although violent video game content and its effects have been examined extensively by empirical research, verbal aggression in the form of profanity has received less attention. Building on preliminary findings from previous studies, an extensive content analysis of profanity in video games was conducted using a sample of the 150 top-selling video games across all popular game platforms (including home consoles, portable consoles, and personal computers). The frequency of profanity, both in general and across three profanity categories, was measured and compared to games' ratings, sales, and platforms. Generally, profanity was found in about one in five games and appeared primarily in games rated for teenagers or above. Games containing profanity, however, tended to contain it frequently. Profanity was not found to be related to games' sales or platforms.
Helen Gail Prosser
Northern Lakes College in north-central Alberta is the first post-secondary institution in Canada to use the Media on Demand digital video system to stream large video files between dispersed locations (Karlsen). Staff and students at distant locations of Northern Lakes College are now viewing more than 350 videos using video streaming technology. This has been made possible by SuperNet, a high-capacity broadband network that connects schools, hospitals, libraries and government offices throughout the province of Alberta (Alberta SuperNet). This article describes the technical process of implementing video streaming at Northern Lakes College from March 2005 until March 2006.
Gregory J Barord
The extant species of Nautilus and Allonautilus (Cephalopoda) inhabit fore-reef slope environments across a large geographic area of the tropical western Pacific and eastern Indian Oceans. While many aspects of their biology and behavior are now well documented, uncertainties concerning their current populations and ecological role in the deeper, fore-reef slope environments remain. Given the historical to present-day presence of nautilus fisheries at various locales across the Pacific and Indian Oceans, a comparative assessment of the current state of nautilus populations is critical to determine whether conservation measures are warranted. We used baited remote underwater video systems (BRUVS) to make quantitative photographic records as a means of estimating population abundance of Nautilus sp. at sites in the Philippine Islands, American Samoa, Fiji, and along an approximately 125 km transect on the fore-reef slope of the Great Barrier Reef from east of Cairns to east of Lizard Island, Australia. Each site was selected based on its geography, historical abundance, and the presence (Philippines) or absence (other sites) of Nautilus fisheries. The results from these observations indicate that there are significantly fewer nautiluses observable with this method at the Philippine Islands site. While there may be multiple explanations for this difference, the most parsimonious is that the Philippine Islands population has been reduced by fishing. When compared to historical trap records from the same site, the data suggest there were far more nautiluses at this site in the past. The BRUVS proved to be a valuable tool to measure Nautilus abundance in the deep sea (300-400 m) while reducing our overall footprint on the environment.
Wadwekar, Vaibhav; Nair, Pradeep Pankajakshan; Murgai, Aditya; Thirunavukkarasu, Sibi; Thazhath, Harichandrakumar Kottyen
Different studies have described useful signs to diagnose psychogenic non-epileptic seizure (PNES). A few authors have tried to describe semiologic groups among PNES patients, each group consisting of a combination of features, but there is no uniformity of nomenclature among these studies. Our aim was to find out whether the objective classification system proposed by Hubsch et al. was useful and adequate to classify a PNES patient population from South India. We retrospectively analyzed medical records and video EEG monitoring data of patients recorded during the 3-year period from June 2010 to July 2013. We observed the semiologic features of each PNES episode and grouped them strictly adhering to the Hubsch et al. classification. Minor modifications were made to include patients who were left unclassified. A total of 65 patients were diagnosed with PNES during this period, of which 11 patients were excluded due to inadequate data. We could classify 42 (77.77%) patients without modifying the defining criteria of the Hubsch et al. groups. With minor modification we could classify 94.96% of patients. The modified groups with patient distribution are as follows: Class 1--dystonic attacks with primitive gestural activities [3 (5.6%)]. Class 2--paucikinetic attacks with or without preserved responsiveness [5 (9.3%)]. Class 3--pseudosyncope with or without hyperventilation [21 (38.9%)]. Class 4--hyperkinetic prolonged attacks with hyperventilation, involvement of limbs and/or trunk [14 (25.9%)]. Class 5--axial dystonic attacks [8 (14.8%)]. Class 6--unclassified type [3 (5.6%)]. This study demonstrates that the Hubsch classification with minor modifications is useful and adequate to classify PNES patients from South India. Copyright © 2013 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.
Ghionis, George; Trygonis, Vassilis; Karydis, Antonis; Vousdoukas, Michalis; Alexandrakis, George; Drakopoulos, Panos; Amdreadis, Olympos; Psarros, Fotis; Velegrakis, Antonis; Poulos, Serafim
Effective beach management requires environmental assessments that are based on sound science, are cost-effective and are available to beach users and managers in an accessible, timely and transparent manner. The most common problems are: 1) The available field data are scarce and of sub-optimal spatio-temporal resolution and coverage, 2) our understanding of local beach processes needs to be improved in order to accurately model/forecast beach dynamics under a changing climate, and 3) the information provided by coastal scientists/engineers in the form of data, models and scientific interpretation is often too complicated to be of direct use by coastal managers/decision makers. A multispectral video system has been developed, consisting of one or more video cameras operating in the visible part of the spectrum, a passive near-infrared (NIR) camera, an active NIR camera system, a thermal infrared camera and a spherical video camera, coupled with innovative image processing algorithms and a telemetric system for the monitoring of coastal environmental parameters. The complete system has the capability to record, process and communicate (in quasi-real time) high frequency information on shoreline position, wave breaking zones, wave run-up, erosion hot spots along the shoreline, nearshore wave height, turbidity, underwater visibility, wind speed and direction, air and sea temperature, solar radiation, UV radiation, relative humidity, barometric pressure and rainfall. An innovative, remotely-controlled interactive visual monitoring system, based on the spherical video camera (with 360°field of view), combines the video streams from all cameras and can be used by beach managers to monitor (in real time) beach user numbers, flow activities and safety at beaches of high touristic value. The high resolution near infrared cameras permit 24-hour monitoring of beach processes, while the thermal camera provides information on beach sediment temperature and moisture, can
H.264 delivers streaming video in high quality for various applications. The coding tools involved in H.264, however, make its video codec implementation very complicated, raising the need for algorithm optimization and hardware acceleration. In this paper, a novel adaptive crossed quarter polar pattern search (ACQPPS) algorithm is proposed to realize an enhanced inter prediction for H.264. Moreover, an efficient prototyping system-on-platform architecture is also presented, which can be utilized for the realization of an H.264 baseline profile encoder with the support of an integrated ACQPPS motion estimator and related video IP accelerators. The implementation results show that the ACQPPS motion estimator can achieve very high estimated image quality, comparable to that of the full search method in terms of peak signal-to-noise ratio (PSNR), while keeping the complexity at an extremely low level. With the integrated IP accelerators and optimized techniques, the proposed system-on-platform architecture sufficiently supports real-time H.264 encoding at low cost.
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these, for display on color CRT with analog information concerning fading.
A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. The methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli at a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, Frontiers in Psychology 4:359, 2013) are presented as an example of the implementation of playback at 120 fps.
Ding, Renquan; Tong, Xiangdong; Xu, Shiguang; Zhang, Dakun; Gao, Xin; Teng, Hong; Qu, Jiaqi; Wang, Shumin
In recent years, application of the da Vinci robot system in the surgical treatment of mediastinal diseases has become more mature. The aim of this study is to summarize the clinical data on mediastinal lesions from the General Hospital of Shenyang Military Region over the past 4 years, and to analyze the treatment effect and promising applications of the da Vinci robot system in the surgical treatment of mediastinal lesions. 203 cases of mediastinal lesions were collected from the General Hospital of Shenyang Military Region between 2010 and 2013. These patients were divided into two groups, da Vinci and video-assisted thoracoscopic surgery (VATS), according to the selected treatment. Operative time, intraoperative blood loss, postoperative drainage amount within three days after surgery, duration of drainage tubes, hospital stay and hospitalization expense were then compared. All patients were operated on successfully, postoperative recovery was good, and there were no perioperative deaths. Operative time was 82 (20-320) min in the robot group and 89 (35-360) min in the thoracoscopic group (P>0.05). Intraoperative blood loss was 10 (1-100) mL in the robot group and 50 (3-1,500) mL in the thoracoscopic group. Postoperative drainage within three days was 215 (0-2,220) mL in the robot group and 350 (50-1,810) mL in the thoracoscopic group. The duration of drainage tubes after surgery was 3 (0-10) d in the robot group and 5 (1-18) d in the thoracoscopic group. Hospital stay was 7 (2-15) d in the robot group and 9 (2-50) d in the thoracoscopic group. Hospitalization expense was (18,983.6±4,461.2) RMB in the robot group and (9,351.9±2,076.3) RMB in the thoracoscopic group (all P<0.05). The da Vinci robot system thus compared favorably with the video-assisted thoracoscopic approach on these perioperative measures, even though its expense is higher.
NEI YouTube Videos: Amblyopia (embedded video).
Rheumatoid Arthritis Educational Video Series: a series of five videos designed for patient education.
Chaves, Rafael Oliveira; de Oliveira, Pedro Armando Valente; Rocha, Luciano Chaves; David, Joacy Pedro Franco; Ferreira, Sanmari Costa; Santos, Alex de Assis Santos Dos; Melo, Rômulo Müller Dos Santos; Yasojima, Edson Yuzur; Brito, Marcus Vinicius Henriques
In order to engage medical students and residents from public health centers to utilize the telemedicine features of surgery on their own smartphones and tablets as an educational tool, an innovative streaming system was developed with the purpose of streaming live footage from open surgeries to smartphones and tablets, allowing visualization of the surgical field from the surgeon's perspective. The current study aims to describe the results of an evaluation at level 1 of Kirkpatrick's Model for Evaluation of the streaming system's usage during gynecological surgeries, based on the perception of medical students and gynecology residents. The system consisted of live video streaming (from the surgeon's point of view) of gynecological surgeries to smartphones and tablets, one for each volunteer. The volunteers were able to connect to the local wireless network, created by the streaming system, through an access password and watch the video transmission in a web browser on their smartphones. They then answered a Likert-type questionnaire containing 14 items about the educational applicability of the streaming system, as well as comparing it to watching an in loco procedure. This study is formally approved by the local ethics commission (Certificate No. 53175915.7.0000.5171/2016). Twenty-one volunteers participated, totaling 294 items answered, of which 94.2% agreed with the item statements, 4.1% were neutral, and only 1.7% corresponded to negative impressions. Cronbach's α was .82, which represents a good level of reliability. Spearman's coefficients were highly significant in 4 comparisons and moderately significant in the other 20 comparisons. This study presents a system for local streaming of live surgery video to smartphones and tablets and shows its educational utility, low cost, and simple usage, which offers convenience and satisfactory image resolution, thus being potentially applicable in surgical teaching.
Scatena, Michele; Dittoni, Serena; Maviglia, Riccardo; Frusciante, Roberto; Testani, Elisa; Vollono, Catello; Losurdo, Anna; Colicchio, Salvatore; Gnoni, Valentina; Labriola, Claudio; Farina, Benedetto; Pennisi, Mariano Alberto; Della Marca, Giacomo
The aim of the present study was to develop and validate a software tool for the detection of movements during sleep, based on automated analysis of video recordings. This software is aimed at detecting and quantifying movements and at evaluating periods of sleep and wake. We applied an open-source software package, previously distributed on the web (ZoneMinder, ZM), meant for video surveillance. A validation study was performed: computed movement analysis was compared with two standardised, 'gold standard' methods for the analysis of sleep-wake cycles: actigraphy and laboratory-based video-polysomnography. Sleep variables evaluated by ZM were not different from those measured by traditional sleep-scoring systems. Bland-Altman plots showed an overlap between the scores obtained with ZM, PSG and actigraphy, with a slight tendency of ZM to overestimate nocturnal awakenings. ZM showed a good degree of accuracy with respect to both PSG (79.9%) and actigraphy (83.1%); it had very high sensitivity (ZM vs. PSG: 90.4%; ZM vs. actigraphy: 89.5%) and relatively lower specificity (ZM vs. PSG: 42.3%; ZM vs. actigraphy: 65.4%). The computer-assisted motion analysis is reliable and reproducible, and it allows a reliable estimate of some sleep and wake parameters. The motion-based sleep analysis shows a trend to overestimate wakefulness. The possibility of measuring sleep from video recordings may be useful in those clinical and experimental conditions in which traditional PSG studies cannot be performed. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
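The underlying motion-detection idea (frame differencing aggregated into sleep/wake epochs) can be sketched independently of the surveillance software. All thresholds and the epoch length below are illustrative assumptions, not the validated tool's parameters:

```python
import numpy as np

def motion_score(prev, curr, pixel_thresh=15):
    """Fraction of pixels that changed between two consecutive frames."""
    return np.mean(np.abs(curr.astype(int) - prev.astype(int)) > pixel_thresh)

def score_epochs(frames, frames_per_epoch=4, wake_thresh=0.01):
    """Label each epoch 'W' (wake) if its mean motion score exceeds a
    threshold, else 'S' (sleep) -- a simplified analogue of
    motion-based sleep/wake staging."""
    scores = [motion_score(a, b) for a, b in zip(frames, frames[1:])]
    labels = []
    for i in range(0, len(scores), frames_per_epoch):
        chunk = scores[i:i + frames_per_epoch]
        labels.append('W' if np.mean(chunk) > wake_thresh else 'S')
    return labels

# Synthetic clip: 5 still frames (sleep), then 4 frames with a moving bar.
still = [np.zeros((40, 40), dtype=np.uint8)] * 5
moving = []
for step in range(4):
    f = np.zeros((40, 40), dtype=np.uint8)
    f[10 + step, :] = 200          # bright bar shifts down one row per frame
    moving.append(f)
print(score_epochs(still + moving))  # ['S', 'W']
```

A threshold like `wake_thresh` is also where the overestimation of wakefulness reported above would come from: any non-sleep movement (e.g. a bed-partner or lighting change) raises the motion score and pushes an epoch toward 'W'.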
Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer
We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...
Ostrowski, Jeffrey R.; Sarhan, Nabil J.
The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.
Mills, George T.
In a video system, a video signal operates at a vertical scan rate to generate a first video frame characterized by a first number of lines per frame. A method and apparatus are provided to convert the first video frame into a second video frame characterized by a second number of lines per frame. The first video frame is stored at the vertical scan rate as digital samples. A portion of the stored digital samples from each line of the first video frame are retrieved at the vertical scan rate. The number of digital samples in the retrieved portion from each line of the first video frame is governed by a ratio equal to the second number divided by the first number, such that the retrieved portion from the first video frame is the second video frame.
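The ratio-governed sample retrieval described in the patent abstract can be sketched as follows; the even decimation pattern is an assumption, since the abstract does not specify which samples within a line are retained.

```python
def retained_portion(line_samples, first_lines, second_lines):
    """Return the portion of one line's samples retrieved during conversion
    from a frame of `first_lines` lines to one of `second_lines` lines.
    Per the abstract, the portion size is governed by the ratio
    second_lines / first_lines; an even decimation across the line is
    assumed here for illustration."""
    n = len(line_samples)
    keep = round(n * second_lines / first_lines)
    step = n / keep
    return [line_samples[int(i * step)] for i in range(keep)]
```

For example, converting a 625-line frame to a 525-line frame retains a fraction 525/625 = 0.84 of each line's samples.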
Lazar, Aurel A; Pnevmatikakis, Eftychios A
We investigate architectures for time encoding and time decoding of visual stimuli such as natural and synthetic video streams (movies, animation). The architecture for time encoding is akin to models of the early visual system. It consists of a bank of filters in cascade with single-input multi-output neural circuits. Neuron firing is based on either a threshold-and-fire or an integrate-and-fire spiking mechanism with feedback. We show that analog information is represented by the neural circuits as projections on a set of band-limited functions determined by the spike sequence. Under Nyquist-type and frame conditions, the encoded signal can be recovered from these projections with arbitrary precision. For the video time encoding machine architecture, we demonstrate that band-limited video streams of finite energy can be faithfully recovered from the spike trains and provide a stable algorithm for perfect recovery. The key condition for recovery calls for the number of neurons in the population to be above a threshold value.
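A minimal integrate-and-fire time encoder, the kind of spiking mechanism the architecture above builds on, can be sketched as follows; the bias, threshold and reset-by-subtraction convention are generic modelling choices, not the paper's parameters.

```python
def iaf_encode(signal, dt, threshold, bias=1.0):
    """Integrate-and-fire time encoding: integrate (bias + signal) over time
    and emit a spike time whenever the integral reaches `threshold`, then
    reset by subtracting the threshold. Analog amplitude information is
    thereby represented purely in the spike timing."""
    spikes, integral, t = [], 0.0, 0.0
    for x in signal:
        integral += (bias + x) * dt
        t += dt
        if integral >= threshold:
            spikes.append(t)
            integral -= threshold
    return spikes
```

A constant input yields perfectly regular spiking; the inter-spike interval shortens as the input level rises, which is how the stimulus can later be recovered from the spike train.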
Höferlin, Markus Johannes
The amount of video data recorded world-wide is growing tremendously and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...
Fornari, D.; Howland, J.; Lerner, S.; Gegg, S.; Walden, B.; Bowen, A.; Lamont, M.; Kelley, D.
We report on an archiving effort to transfer video footage currently on Hi-8 and VHS tape to digital media (DVD). At the same time as this is being done, frame grab imagery at reasonable resolution (640x480) at 30 sec. intervals will be compiled and the images will be integrated, as much as possible, with vehicle attitude/navigation data and provided to the user community in a web-browser format, such as has already been done for the recent Jason and Alvin frame grabbed imagery. The frame-grabbed images will be tagged with time, thereby permitting integration of vehicle attitude and navigation data once that is available. In order to prototype this system, we plan to utilize data from the East Pacific Rise and Juan de Fuca Ridge, which are field areas selected by the community as Ridge2000 Integrated Study Sites. There are over 500 Alvin dives in both these areas and having frame-grabbed, synoptic views of the terrains covered during those dives will be invaluable for scientific and outreach use as part of Ridge2000. We plan to coordinate this activity with the Ridge2000 Data Management Office at LDEO.
A.J. Jansen (Jack); D.C.A. Bulterman (Dick)
The complexities and physical constraints associated with video transmission make the introduction of video playout delays unavoidable. Tuning systems to reduce delay requires an ability to effectively and easily gather delay metrics on a potentially wide range of systems. In order to
Full Text Available INTRODUCTION: Half of fatal injuries among bicyclists are head injuries. While helmet use is likely to provide protection, their use often remains rare. We assessed the influence of strategies for promotion of helmet use with direct observation of behaviour by a semi-automatic video system. METHODS: We performed a single-centre randomised controlled study, with 4 balanced randomisation groups. Participants were non-helmet users, aged 18-75 years, recruited at a loan facility in the city of Bordeaux, France. After completing a questionnaire investigating their attitudes towards road safety and helmet use, participants were randomly assigned to three groups with the provision of "helmet only", "helmet and information" or "information only", and to a fourth control group. Bikes were labelled with a colour code designed to enable observation of helmet use by participants while cycling, using a 7-spot semi-automatic video system located in the city. A total of 1557 participants were included in the study. RESULTS: Between October 15th 2009 and September 28th 2010, 2621 cyclists' movements, made by 587 participants, were captured by the video system. Participants seen at least once with a helmet amounted to 6.6% of all observed participants, with higher rates in the two groups that received a helmet at baseline. The likelihood of observed helmet use was significantly increased among participants of the "helmet only" group (OR = 7.73 [2.09-28.5]) and this impact faded within six months following the intervention. No effect of information delivery was found. CONCLUSION: Providing a helmet may be of value, but will not be sufficient to achieve high rates of helmet wearing among adult cyclists. Integrated and repeated prevention programmes will be needed, including free provision of helmets, but also information on the protective effect of helmets and strategies to increase peer and parental pressure.
Constant, Aymery; Messiah, Antoine; Felonneau, Marie-Line; Lagarde, Emmanuel
Roel, May; Hamre, Oeyvind; Vang, Roald; Nygaard, Torgeir
Collisions between birds and wind turbines can be a problem at wind-power plants both onshore and offshore, and the presence of endangered bird species or proximity to key functional bird areas can have a major impact on the choice of site or location of wind turbines. There is international consensus that one of the main challenges in the development of measures to reduce bird collisions is the lack of good methods for assessing the efficacy of interventions. In order to be better able to assess the efficacy of mortality-reducing measures, Statkraft wishes to find a system that can be operated under Norwegian conditions and that renders objective and quantitative information on collisions and near-flying birds. DTBird, developed by Liquen Consultoria Ambiental S.L., is such a system; it is based on video-recording bird flights near turbines during the daylight period (light levels > 200 lux). DTBird is an autonomous system developed to detect flying birds and to take programmed actions (i.e. warning, dissuasion, collision registration, and turbine stop control) linked to real-time bird detection. This report evaluates how well the DTBird system is able to detect birds in the vicinity of a wind turbine, and assesses to what extent it can be utilized to study near-turbine bird flight behaviour and possible deterrence. The evaluation was based on the video sequences recorded with the DTBird systems installed at turbine 21 and turbine 42 at the Smoela wind-power plant between March 2 2012 and September 30 2012, together with GPS telemetry data on white-tailed eagles and avian radar data. The average number of falsely triggered video sequences (false positive rate) was 1.2 per day, and during daytime the DTBird system recorded between 76% and 96% of all bird flights in the vicinity of the turbines. Visually estimated distances of recorded bird flights in the video sequences were in general assessed to be farther from the turbines compared to the distance settings used within
Michail N. Giannakos
Full Text Available Online video lectures have been considered an instructional media for various pedagogic approaches, such as the flipped classroom and open online courses. In comparison to other instructional media, online video affords the opportunity for recording student clickstream patterns within a video lecture. Video analytics within lecture videos may provide insights into student learning performance and inform the improvement of video-assisted teaching tactics. Nevertheless, video analytics are not accessible to learning stakeholders, such as researchers and educators, mainly because online video platforms do not broadly share the interactions of the users with their systems. For this purpose, we have designed an open-access video analytics system for use in a video-assisted course. In this paper, we present a longitudinal study, which provides valuable insights through the lens of the collected video analytics. In particular, we found that there is a relationship between video navigation (repeated views) and the level of cognition/thinking required for a specific video segment. Our results indicated that learning performance progress was slightly improved and stabilized after the third week of the video-assisted course. We also found that attitudes regarding easiness, usability, usefulness, and acceptance of this type of course remained at the same levels throughout the course. Finally, we triangulate analytics from diverse sources, discuss them, and provide the lessons learned for further development and refinement of video-assisted courses and practices.
Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay
Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from key frames. However, most of this research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicast as annotations or metadata over a separate channel to assist in content filtering, such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
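A simple histogram-difference detector illustrates the kind of scene change detection discussed above; the bin count and cutoff are illustrative assumptions, and the actual system operates on-line on compressed MBone bitstreams rather than on raw frames.

```python
import numpy as np

def scene_changes(frames, bins=16, cutoff=0.5):
    """Flag frame index i as a scene change when the L1 distance between
    the normalised grey-level histograms of frames i-1 and i exceeds
    `cutoff`. Processes frames one at a time, so it suits on-line use."""
    changes = []
    prev = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and np.abs(hist - prev).sum() > cutoff:
            changes.append(i)
        prev = hist
    return changes
```

The frame at each detected index could then be emitted as a key frame on the metadata channel.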
Whitehead, Stuart; Rush, Joshua
Logo PDF files should be accessible by any PDF reader, such as Adobe Reader. SVG files of the logo are vector graphics accessible by programs such as Inkscape or Adobe Illustrator. PNG files are image files of the logo that can be opened by any operating system's default image viewer. The final report is submitted in both .doc (Microsoft Word) and .pdf formats. The video is submitted in .avi format and can be viewed with Windows Media Player or VLC. Audio .wav files are also ...
Full Text Available Modern trends in crime control include a variety of technological innovations, including video surveillance systems. The aim of this paper is to review the implementation of video surveillance in a contemporary context, considering fundamental theoretical aspects, the legislation and the effectiveness in controlling crime. In considering the theoretical sources of ideas on the implementation of video surveillance, priority was given to the concept of situational prevention, which focuses on the contextual factors of crime. Capacities for the implementation of video surveillance in Serbia are discussed based on an analysis of the relevant international and domestic legislation, the shortcomings in regulation of this area and possible solutions. Special attention was paid to the effectiveness of video surveillance in public places, in schools and in prisons. Starting from the results of studies of video surveillance effectiveness, the strengths and weaknesses of these measures and recommendations for improving practice are discussed.
Guruvadoo, Eranna K.
In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, the prototype of a web-based tool is designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets reside in a separate repository. The prototype tool is designed using ColdFusion 5.0.
David I. King; Richard M. DeGraaf; Paul J. Champlin; Tracey B. Champlin
Video monitoring of active bird nests is gaining popularity among researchers because it eliminates many of the biases associated with reliance on incidental observations of predation events or use of artificial nests, but the expense of video systems may be prohibitive. Also, the range and efficiency of current video monitoring systems may be limited by the need to...
Verma, Brijesh; Stockwell, David
This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.
Sylvette R. Wiener-Vacher
Full Text Available The video head impulse test (VHIT) is widely used to identify semicircular canal function impairments in adults. But classical VHIT testing systems attach goggles tightly to the head, which is not tolerated by infants. Remote video detection of head and eye movements resolves this issue and, here, we report VHIT protocols and normative values for children. Vestibulo-ocular reflex (VOR) gain was measured for all canals of 303 healthy subjects, including 274 children (aged 2.6 months–15 years) and 26 adults (aged 16–67). We used the Synapsys® (Marseilles, France) VHIT Ulmer system, whose remote camera measures head and eye movements. HITs were performed at high velocities. Testing typically lasts 5–10 min. In infants as young as 3 months old, VHIT yielded good inter-measure replicability. VOR gain increases rapidly until about the age of 6 years (with variation among canals), then progresses more slowly to reach adult values by the age of 16. Values are more variable among very young children and for the vertical canals, but showed no difference for right versus left head rotations. Normative values of VOR gain are presented to help detect vestibular impairment in patients. VHIT testing prior to cochlear implants could help prevent total vestibular loss and the resulting grave impairments of motor and cognitive development in patients with residual unilateral vestibular function.
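VOR gain, the quantity reported above, is commonly operationalised as the ratio of eye-velocity to head-velocity magnitude. The peak-velocity ratio below is one such definition, shown as a sketch; commercial VHIT systems may instead use velocity regression or area-under-the-curve methods.

```python
def vor_gain(eye_velocity, head_velocity):
    """VOR gain as |eye velocity| / |head velocity| (deg/s over deg/s).
    The eye moves opposite to the head, so magnitudes are compared;
    a gain near 1.0 indicates a compensatory reflex, well below 1.0
    suggests canal impairment."""
    if head_velocity == 0:
        raise ValueError("head velocity must be non-zero")
    return abs(eye_velocity) / abs(head_velocity)
```

For a 150 deg/s head impulse fully compensated by a -150 deg/s eye movement, the gain is 1.0; a -75 deg/s eye response would give 0.5.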
Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team
A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.
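EDICAM's ROI evaluation (minimum/maximum, mean comparison to levels) can be sketched as a simple threshold check; the return layout and trigger logic below are illustrative assumptions, not the camera's actual firmware interface.

```python
def roi_event(roi_pixels, low=None, high=None):
    """Compute min/max/mean statistics for a region of interest and report
    whether the mean crosses a configured level, the kind of check that
    would drive a readout change or an output signal."""
    mean = sum(roi_pixels) / len(roi_pixels)
    stats = {"min": min(roi_pixels), "max": max(roi_pixels), "mean": mean}
    triggered = (low is not None and mean < low) or \
                (high is not None and mean > high)
    return stats, triggered
```

Because only a small ROI is evaluated, such a check can run at sub-ms rates even while a long overview exposure is still accumulating on the rest of the sensor.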
Full Text Available Abstract Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculation of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures.
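Once per-frame positions are available from automated tracking, a migration rate reduces to path length over elapsed time. The sketch below assumes 2-D (x, y) positions and a fixed frame interval; it replaces the error-prone manual position-picking the abstract criticises, but is not the paper's tracking algorithm.

```python
import math

def migration_rate(track, frame_interval_min):
    """Mean speed (distance units per minute) of one tracked cell, given
    its per-frame (x, y) positions and the time between frames."""
    if len(track) < 2:
        return 0.0
    path = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    return path / (frame_interval_min * (len(track) - 1))
```

For a cell imaged every 10 minutes at (0, 0), (3, 4), (6, 8), the path length is 10 units over 20 minutes, i.e. 0.5 units/min.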
Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini
Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.
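The background modeling mentioned above can be illustrated with a running-average model, one of the simplest techniques in that family; the learning rate and foreground threshold are illustrative assumptions, not the system's actual parameters.

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model: bg <- (1 - alpha) * bg + alpha * frame.
    Slow adaptation lets lighting drift into the model while moving
    objects stay foreground."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """A pixel is foreground when it deviates from the background
    estimate by more than `threshold` intensity levels."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]
```

The resulting foreground mask is the input to object tracking, after which non-tracked regions could be rendered normally and tracked individuals obscured according to the access control policy.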
Bourgonjon, Jeroen; Soetaert, Ronald
... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...
Is video becoming "the new black" in academia, and if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of "academic video" for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo. This raises questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts.