WorldWideScience

Sample records for video display system

  1. Real-time embedded system for stereo video processing for multiview displays

    Science.gov (United States)

    Berretty, R.-P. M.; Riemens, A. K.; Machado, P. F.

    2007-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview auto-stereoscopic displays are entering the market. Such displays offer various views at the same time. Depending on their positions, the viewers' eyes see different images. Hence, a viewer's left eye receives a signal that is different from what the right eye gets; provided the signals have been properly processed, this gives the impression of depth. New auto-stereoscopic products use an image-plus-depth interface. On the other hand, a growing number of 3D productions from the entertainment industry use a stereo format. In this paper, we show how to compute depth from the stereo signal to comply with the display interface format. Furthermore, we present a realisation suitable for a real-time cost-effective implementation on an embedded media processor.
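
    The abstract does not give the authors' embedded algorithm in detail; as a rough illustration of the kind of stereo-to-depth conversion it describes, the sketch below (assuming rectified input frames and hypothetical file names) uses OpenCV block matching to turn a stereo pair into a normalized depth map suitable for an image-plus-depth interface.

```python
# Hedged sketch: block-matching disparity from a rectified stereo pair.
# This is NOT the authors' embedded algorithm, only a generic illustration.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified left view
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # assumed rectified right view

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# For an image-plus-depth interface, disparity is commonly rescaled to an 8-bit depth map.
depth_map = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth_map)
```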

  2. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphic process supervision, using colour symbol video displays, is described. It has the following characteristics: compact unit with no external memory for image storage; problem-oriented, simple, descriptive tailoring to the process program; no restriction on the graphical representation of process variables; computer and display independence, achieved by the implementation of colours and parameterized code creation for the display. (WB)

  3. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  4. Internet Protocol Display Sharing Solution for Mission Control Center Video System

    Science.gov (United States)

    Brown, Michael A.

    2009-01-01

    With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) provides the ability to visually enhance real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it can be difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into new, innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will be provided to enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications are readily able to substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. The DS process shall invest in collectively automating the sharing of images while focusing on such characteristics as managing bandwidth, encrypting security measures, synchronizing disconnections from loss of signal / loss of acquisition, and performance latency, and shall provide functions such as scalability, multi-sharing, ease of initial integration / sustained configuration, integration with video adjustment packages, collaborative tools, host / recipient controllability, and, as the utmost paramount priority, an enterprise solution that provides ownership to the whole ...

  5. Processing Decoded Video for LCD-LED Backlight Display

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan

    The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and compression standards. Accurate knowledge about the characteristics of the display and the video signal can be utilized to develop advanced algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of the perceived image and video and reduce power consumption on local LED-LCD backlight; second, removing digital video codec artifacts such as blocking and ringing artifacts by post-processing algorithms. A novel algorithm based on image features with an optimal balance between visual quality and power consumption was developed. In addition, to remove flickering ...
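
    The thesis algorithms are not reproduced in this record; the sketch below is only a generic, assumed baseline for zone-based LED backlight dimming with pixel compensation, illustrating the kind of processing the abstract refers to.

```python
# Hedged sketch: a simple zone-based backlight dimming baseline, NOT the thesis
# algorithm. Each LED zone is driven by a high percentile of the local luminance
# and the LCD pixel values are compensated for the reduced backlight.
import numpy as np

def dim_backlight(luma, zones=(8, 12), percentile=95):
    """luma: H x W luminance in [0, 1]. Returns per-zone backlight levels and the
    compensated LCD transmittance (the image is cropped to a multiple of the zone grid)."""
    zh, zw = luma.shape[0] // zones[0], luma.shape[1] // zones[1]
    luma = luma[:zones[0] * zh, :zones[1] * zw]          # crop to the zone grid
    backlight = np.zeros(zones)
    for i in range(zones[0]):
        for j in range(zones[1]):
            block = luma[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            backlight[i, j] = np.percentile(block, percentile)
    bl_full = np.kron(backlight, np.ones((zh, zw)))      # nearest-neighbour upsampling
    lcd = np.clip(luma / np.maximum(bl_full, 1e-3), 0.0, 1.0)
    return backlight, lcd
```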

  6. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include 3D laparoscopes, 3D surgical microscopes, 3D open surgery cameras, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  7. Video integrated measurement system. [Diagnostic display devices

    Energy Technology Data Exchange (ETDEWEB)

    Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.

    1982-06-01

    A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.

  8. Affordable multisensor digital video architecture for 360° situational awareness displays

    Science.gov (United States)

    Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana

    2011-06-01

    One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e. closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e. low latency). Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing it simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.

  9. Ethernet direct display: a new dimension for in-vehicle video connectivity solutions

    Science.gov (United States)

    Rowley, Vincent

    2009-05-01

    To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT™ Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay™, a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how - in concert with other elements of the end-to-end iPORT Video Connectivity Solution - the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.

  10. Coupled auralization and virtual video for immersive multimedia displays

    Science.gov (United States)

    Henderson, Paul D.; Torres, Rendell R.; Shimizu, Yasushi; Radke, Richard; Lonsway, Brian

    2003-04-01

    The implementation of maximally-immersive interactive multimedia in exhibit spaces requires not only the presentation of realistic visual imagery but also the creation of a perceptually accurate aural experience. While conventional implementations treat audio and video problems as essentially independent, this research seeks to couple the visual sensory information with dynamic auralization in order to enhance perceptual accuracy. An implemented system has been developed for integrating accurate auralizations with virtual video techniques for both interactive presentation and multi-way communication. The current system utilizes a multi-channel loudspeaker array and real-time signal processing techniques for synthesizing the direct sound, early reflections, and reverberant field excited by a moving sound source whose path may be interactively defined in real-time or derived from coupled video tracking data. In this implementation, any virtual acoustic environment may be synthesized and presented in a perceptually-accurate fashion to many participants over a large listening and viewing area. Subject tests support the hypothesis that the cross-modal coupling of aural and visual displays significantly affects perceptual localization accuracy.
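
    As a hedged illustration of one element mentioned above, the sketch below synthesizes only the direct sound of a virtual source with an assumed delay-and-attenuate model (1/r gain, c = 343 m/s); the early reflections, reverberant field, and loudspeaker-array panning of the actual system are not shown.

```python
# Hedged sketch: direct-sound synthesis for a virtual source, assuming a simple
# delay-and-attenuate model (1/r gain, speed of sound c = 343 m/s). This is not
# the authors' loudspeaker-array renderer.
import numpy as np

def direct_sound(signal, fs, source_pos, listener_pos, c=343.0):
    """signal: 1-D array; fs: sample rate in Hz; positions in metres."""
    r = np.linalg.norm(np.asarray(source_pos, float) - np.asarray(listener_pos, float))
    delay = int(round(r / c * fs))       # propagation delay in samples
    gain = 1.0 / max(r, 0.1)             # inverse-distance attenuation, clamped
    return gain * np.concatenate([np.zeros(delay), np.asarray(signal, float)])
```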

  11. High-definition video display based on the FPGA and THS8200

    Science.gov (United States)

    Qian, Jia; Sui, Xiubao

    2014-11-01

    This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder (digital-to-analog converter) chip from TI; it has three 10-bit DAC channels, accepts video data in both 4:2:2 and 4:4:4 formats, and can take its synchronization either from the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this paper, we utilize the address and control signals generated by the FPGA to access the data-storage array, and the FPGA then generates the corresponding digital video signals YCbCr. These signals, combined with the synchronization signals HSYNC and VSYNC that are also generated by the FPGA, act as the input signals of the THS8200. In order to meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2×10-bit interface. The THS8200 is controlled by the FPGA over the I2C bus to set its internal registers; as a result, it generates synchronization signals that satisfy the SMPTE standard and converts the digital video signals YCbCr into the analog video signals YPbPr. Hence, the composite analog output signals YPbPr consist of the image data signal and the synchronization signal superimposed together inside the THS8200. The experimental research indicates that the method presented in this paper is a viable solution for high-definition video display, which conforms to the input requirements of new high-definition display devices.
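
    The FPGA/THS8200 design itself is hardware; as a software illustration of the data format involved, the sketch below converts RGB to YCbCr with 4:2:2 chroma subsampling. The BT.601 full-range coefficients and the even image width are assumptions, not taken from the paper.

```python
# Hedged sketch: RGB -> YCbCr (BT.601, full-range) with 4:2:2 chroma subsampling,
# illustrating the kind of data an FPGA would stream to a video DAC. Coefficients
# and ranges are assumptions, not taken from the paper.
import numpy as np

def rgb_to_ycbcr422(rgb):
    """rgb: H x W x 3 uint8 (even width assumed). Returns Y and horizontally
    subsampled Cb, Cr planes."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
    # 4:2:2 -- average each horizontal pair of chroma samples.
    cb422 = cb.reshape(cb.shape[0], -1, 2).mean(axis=2)
    cr422 = cr.reshape(cr.shape[0], -1, 2).mean(axis=2)
    return y.astype(np.uint8), cb422.astype(np.uint8), cr422.astype(np.uint8)
```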

  12. Congenital malformations among children of women working with video display terminals

    DEFF Research Database (Denmark)

    Brandt, L P; Nielsen, C V

    1990-01-01

    In a case-base study among 214,108 commercial and clerical employees in Denmark the potential effect of the use of video display terminals on the risk of congenital malformations in pregnancy was investigated. The study base was identified by means of register linkage of the Medical Birth Register and the National Register of In-Patients. In the source population 24,352 pregnancy outcomes were registered, 661 of which with congenital malformations entered the case group, and a base sample of 2252 pregnancies was drawn. Data concerning the use of video display terminals, job stress, ergonomic factors, exposure to organic solvents, and life-style factors were obtained from postal questionnaires. The results of this study did not support the hypothesis that the use of video display terminals during pregnancy is associated with an increased risk of congenital malformations.

  13. Image processing of integrated video image obtained with a charged-particle imaging video monitor system

    International Nuclear Information System (INIS)

    Iida, Takao; Nakajima, Takehiro

    1988-01-01

    A new type of charged-particle imaging video monitor system was constructed for video imaging of the distributions of alpha-emitting and low-energy beta-emitting nuclides. The system can display not only the scintillation image due to radiation on the video monitor but also the integrated video image becoming gradually clearer on another video monitor. The distortion of the image is about 5% and the spatial resolution is about 2 line pairs per millimetre (lp/mm). The integrated image is transferred to a personal computer and image processing is performed qualitatively and quantitatively. (author)
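
    As an illustrative software analogue (not the authors' hardware) of how an integrated image "becomes gradually clearer", the sketch below accumulates successive frames and renormalizes the running sum after each one.

```python
# Hedged sketch: a software analogue (not the authors' hardware) of frame
# integration, where the accumulated image becomes gradually clearer.
import numpy as np

def integrate_frames(frames):
    """frames: iterable of H x W arrays (individual video frames). Yields the
    running sum rescaled to 8 bits after each new frame."""
    acc = None
    for frame in frames:
        f = frame.astype(np.float64)
        acc = f if acc is None else acc + f
        yield (255.0 * acc / max(acc.max(), 1e-9)).astype(np.uint8)
```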

  14. Young Children's Analogical Problem Solving: Gaining Insights from Video Displays

    Science.gov (United States)

    Chen, Zhe; Siegler, Robert S.

    2013-01-01

    This study examined how toddlers gain insights from source video displays and use the insights to solve analogous problems. Two- to 2.5-year-olds viewed a source video illustrating a problem-solving strategy and then attempted to solve analogous problems. Older but not younger toddlers extracted the problem-solving strategy depicted in the video…

  15. Flat-panel video resolution LED display system

    Science.gov (United States)

    Wareberg, P. G.; Kennedy, D. I.

    The system consists of a 128 x 128 element X-Y addressable LED array fabricated from green-emitting gallium phosphide. The LED array is interfaced with a 128 x 128 matrix TV camera. Associated electronics provides for seven levels of grey scale above zero with a grey scale ratio of square root of 2. Picture elements are on 0.008 inch centers resulting in a resolution of 125 lines-per-inch and a display area of approximately 1 sq. in. The LED array concept lends itself to modular construction, permitting assembly of a flat panel screen of any desired size from 1 x 1 inch building blocks without loss of resolution. A wide range of prospective aerospace applications exist extending from helmet-mounted systems involving small dedicated arrays to multimode cockpit displays constructed as modular screens. High-resolution LED arrays are already used as CRT replacements in military film-marking reconnaissance applications.

  16. Human engineering guidelines for the evaluation and assessment of Video Display Units

    International Nuclear Information System (INIS)

    Gilmore, W.E.

    1985-07-01

    This report provides the Nuclear Regulatory Commission with a single source that documents known guidelines for conducting formal Human Factors evaluations of Video Display Units (VDUs). The handbook is a "cookbook" of acceptance guidelines for the reviewer faced with the task of evaluating VDUs already designed or planned for service in the control room. The areas addressed are video displays, controls, control/display integration, and workplace layout. Guidelines relevant to each of those areas are presented. The existence of supporting research is also indicated for each guideline. A Comment section and Method for Assessment section are provided for each set of guidelines.

  17. The Eye Catching Property of Digital-Signage with Scent and a Scent-Emitting Video Display System

    Science.gov (United States)

    Tomono, Akira; Otake, Syunya

    In this paper, an effective method of inducing a glance at digital signage by emitting a scent is described. The simulation experiment was done using an immersive VR system because there were many restrictions on experimenting in an actual passageway. In order to investigate the eye-catching property of the digital signage, the passers-by's eye movements were analyzed. The experiments clarified that digital signage accompanied by scent attracted attention and left a strong impression in memory. Next, a scent-emitting video display system applied to digital signage is described. To this end, a scent-emitting device must be developed that is able to quickly change the scents it is releasing and present them from a distance (by a non-contact method), thus maintaining the relationship between the scent and the image. We propose a new method where a device that can release pressurized gases is placed behind a display screen filled with tiny pores. Scents are then ejected from this device, traveling through the pores to the front side of the screen. An excellent scent delivery characteristic was obtained because the distance to the user is short and the scent is presented from the front. We also present a method for inducing viewer reactions using on-screen images, thereby enabling scent release to coincide precisely with viewer inhalations. We anticipate that the simultaneous presentation of scents and video images will deepen viewers' comprehension of these images.

  18. Video display terminals - should operators be concerned

    International Nuclear Information System (INIS)

    Repacholi, M.H.

    1985-01-01

    Although modern offices have traditionally been thought to be among the safest places to work, over the past few years office workers have become concerned that video display terminals could be causing a variety of health problems. Extensive testing has occurred in many countries to determine if VDTs emit hazardous levels of ionizing or non-ionizing radiation. Results of these surveys suggest that radiation emissions are not of concern but that ergonomic factors in the office environment may need to be improved

  19. Investigation of radiation emissions from video display terminals

    International Nuclear Information System (INIS)

    Zuk, W.M.; Stuchly, M.A.; Dvorak, P.; Deslauriers, Y.

    1983-01-01

    This report presents and discusses the results of radiation emission measurements carried out on video display terminals (VDTs) by the Radiation Protection Bureau. While the report is not intended to be an exhaustive review of all of the world literature on the subject, the more important studies performed on VDTs are summarized and reviewed. Attention is drawn to recent information which has not yet become generally available

  20. Modeling the Color Image and Video Quality on Liquid Crystal Displays with Backlight Dimming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire; Burini, Nino

    2013-01-01

    Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric, based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss ...
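
    A minimal sketch of a PSNR-style metric computed in CIE L*a*b* is shown below; the exact weighting and peak value used in the paper may differ, and scikit-image is assumed for the color conversion.

```python
# Hedged sketch of a PSNR-style metric computed in CIE L*a*b*, in the spirit of
# the paper (its exact weighting/peak values may differ).
import numpy as np
from skimage.color import rgb2lab

def psnr_lab(reference_rgb, displayed_rgb, peak=100.0):
    """Inputs: H x W x 3 float RGB in [0, 1]; 'displayed' would come from the
    backlight-dimming display model. peak=100 corresponds to the L* range."""
    mse = np.mean((rgb2lab(reference_rgb) - rgb2lab(displayed_rgb)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```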

  1. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  2. Modular integrated video system (MIVS) review station

    International Nuclear Information System (INIS)

    Garcia, M.L.

    1988-01-01

    An unattended video surveillance unit, the Modular Integrated Video System (MIVS), has been developed by Sandia National Laboratories for International Safeguards use. An important support element of this system is a semi-automatic Review Station. Four component modules, including an 8 mm video tape recorder, a 4-inch video monitor, a power supply and control electronics utilizing a liquid crystal display (LCD), are mounted in a suitcase for portability. The unit communicates through the interactive, menu-driven LCD and may be operated on facility power throughout the world. During surveillance, the MIVS records video information at specified time intervals, while also inserting consecutive scene numbers and tamper event information. Using either of two available modes of operation, the Review Station reads the inserted information and counts the number of missed scenes and/or tamper events encountered on the tapes, and reports this to the user on the LCD. At the end of a review session, the system will summarize the results of the review, stop the recorder, and advise the user of the completion of the review. In addition, the Review Station will check for any video loss on the tape

  3. System design description for the LDUA common video end effector system (CVEE)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The Common Video End Effector System (CVEE), system 62-60, was designed by the Idaho National Engineering Laboratory (INEL) to provide the control interface of the various video end effectors used on the LDUA. The CVEE system consists of a Support Chassis which contains the input and output Opto-22 modules, relays, and power supplies and the Power Chassis which contains the bipolar supply and other power supplies. The combination of the Support Chassis and the Power Chassis make up the CVEE system. The CVEE system is rack mounted in the At Tank Instrument Enclosure (ATIE). Once connected it is controlled using the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center

  4. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  5. Does a video displaying a stair climbing model increase stair use in a worksite setting?

    Science.gov (United States)

    Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F

    2017-08-01

    This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  6. Display Sharing: An Alternative Paradigm

    Science.gov (United States)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration. IP-based systems generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements will include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.

  7. X-radiation from television receivers and video display terminals

    International Nuclear Information System (INIS)

    Huang, Ching-Chung; Lin, Pei-Huo; Lin, Yu-Ming; Weng, Pao-Shan.

    1986-01-01

    This paper deals with the X-radiation from television receivers and video display terminals. The bremsstrahlung production rate was calculated according to the thick target theory, and the transmitted X-radiation was measured by the spectrometry method. The calculated and the measured results were compared and discussed. In addition, evidence was presented that only the highest-energy component of the bremsstrahlung can penetrate the cathode ray tube. (author)

  8. LMDS Lightweight Modular Display System.

    Science.gov (United States)

    1982-02-16

    ... based on standard functions. This means that the cost to produce a particular display function can be met in the most economical fashion and at the same ... not mean that the NTDS interface would be eliminated. What is anticipated is the use of ETHERNET at a low level of system interface, i.e. internal to ... The architecture of the unit's (fig 3-4) input circuitry is based on a video table look-up ROM. The function ...

  9. A generic flexible and robust approach for intelligent real-time video-surveillance systems

    Science.gov (United States)

    Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit

    2004-05-01

    In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to provide help to operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can basically compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised to play back, display, and process video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under control of a TCP-based command network (e.g. for bandwidth occupation control). We report here some results and we show the potential use of such a flexible system in third-generation video surveillance systems. We illustrate the interest of the system in a real case study, which is indoor surveillance.

  10. The modular integrated video system (MIVS)

    International Nuclear Information System (INIS)

    Schneider, S.L.; Sonnier, C.S.

    1987-01-01

    The Modular Integrated Video System (MIVS) is being developed for the International Atomic Energy Agency (IAEA) for use in facilities where mains power is available and the separation of the Camera and Recording Control Unit is desirable. The system is being developed under the US Program for Technical Assistance to the IAEA Safeguards (POTAS). The MIVS is designed to be a user-friendly system, allowing operation with minimal effort and training. The system software, through the use of a Liquid Crystal Display (LCD) and four soft keys, leads the inspector through the setup procedures to accomplish the intended surveillance or maintenance task. Review of surveillance data is accomplished with the use of a Portable Review Station. This Review Station will aid the inspector in the review process and determine the number of missed video scenes during a surveillance period

  11. CANDU 9 operator plant display system

    International Nuclear Information System (INIS)

    Trueman, R.; Webster, A.; MacBeth, M.J.

    1997-01-01

    To meet evolving client and regulatory needs, AECL has adopted an evolutionary approach to the design of the CANDU 9 control centre. That is, the design incorporates feedback from existing stations, reflects the growing diversity in the roles and responsibilities of the operating staff, and reduces costs associated with plant capital and operations, maintenance and administration (OM and A), through the appropriate introduction of new technologies. Underlying this approach is a refined engineering design process that cost-effectively integrates operational feedback and human factors engineering to define the operating staff information and information presentation requirements. Based on this approach, the CANDU 9 control centre will provide utility operating staff with the means to achieve improved operations and reduced OM and A costs. One of the design features that will contribute to the improved operational capabilities of the control centre is a new Plant Display System (PDS) that is separate from the digital control system. The PDS will be used to implement non-safety panel, and console video display systems within the CANDU 9 main control room (MCR). This paper presents a detailed description of the CANDU 9 Plant Display System and features that provide increased operational capabilities. (author)

  12. Dissecting children's observational learning of complex actions through selective video displays.

    Science.gov (United States)

    Flynn, Emma; Whiten, Andrew

    2013-10-01

    Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    2009-02-01

    Interest in 3D video applications and systems is growing rapidly and technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
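
    The layered view-synthesis algorithm itself is not reproduced here; the sketch below only illustrates the core idea of depth-image-based warping, shifting reference-view pixels by a disparity proportional to depth, with the paper's boundary layers and hole filling omitted.

```python
# Hedged sketch of depth-image-based rendering: shift pixels of a reference view
# horizontally by a disparity derived from per-pixel depth. The paper's layered
# boundary handling, z-ordering and hole filling are NOT reproduced here.
import numpy as np

def warp_view(color, depth, max_disparity=32):
    """color: H x W x 3; depth: H x W in [0, 1] (1 = near). Shifts each pixel by a
    disparity proportional to its depth; holes are left black."""
    h, w, _ = color.shape
    disparity = np.round(max_disparity * depth).astype(int)
    out = np.zeros_like(color)
    cols = np.arange(w)
    for row in range(h):
        target = np.clip(cols + disparity[row], 0, w - 1)
        out[row, target] = color[row, cols]   # naive overwrite, no occlusion ordering
    return out
```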

  14. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Müller Karsten

    2008-01-01

    Interest in 3D video applications and systems is growing rapidly and technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  15. Digitized video subject positioning and surveillance system for PET

    International Nuclear Information System (INIS)

    Picard, Y.; Thompson, C.J.

    1995-01-01

    Head motion is a significant contribution to the degradation of image quality of Positron Emission Tomography (PET) studies. Images from different studies must also be realigned digitally to be correlated when the subject position has changed. These constraints could be eliminated if the subject's head position could be monitored accurately. The authors have developed a video camera-based surveillance system to monitor the head position and motion of subjects undergoing PET studies. The system consists of two CCD (charge-coupled device) cameras placed orthogonally such that both face and profile views of the subject's head are displayed side by side on an RGB video monitor. Digitized images overlay the live images in contrasting colors on the monitor. Such a system can be used to (1) position the subject in the field of view (FOV) by displaying the position of the scanner's slices on the monitor along with the current subject position, (2) monitor head motion and alert the operator of any motion during the study and (3) reposition the subject accurately for subsequent studies by displaying the previous position along with the current position in a contrasting color

  16. On-line video image processing system for real-time neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Fujine, S; Yoneda, K; Kanda, K [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1983-09-15

    The neutron radiography system installed at the E-2 experimental hole of the KUR (Kyoto University Reactor) has been used for some NDT applications in the nuclear field. The on-line video image processing system of this facility is introduced in this paper. A 0.5 mm resolution in images was obtained by using a super high quality TV camera developed for X-radiography viewing a NE-426 neutron-sensitive scintillator. The image of the NE-426 on a CRT can be observed directly and visually, thus many test samples can be sequentially observed when necessary for industrial purposes. The video image signals from the TV camera are digitized, with a 33 ms delay, through a video A/D converter (ADC) and can be stored in the image buffer (32 KB DRAM) of a microcomputer (Z-80) system. The digitized pictures are taken with 16 levels of gray scale and resolved to 240 x 256 picture elements (pixels) on a monochrome CRT, with the capability also to display 16 distinct colors on a RGB video display. The direct image of this system could be satisfactory for penetrating the side plates to test MTR type reactor fuels and for the investigation of moving objects.

  17. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Conventionally, the camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a Stereo Video See-Through Head-Mounted Display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to be better than the traditional marker-based and sensor-based AR environment. The demonstration system was evaluated with a plastic dummy head and the display result is satisfactory for a multiple-view observation.
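
    As a rough sketch of the tracking stage described above (not the authors' full pipeline, and omitting the ICP surface registration), the snippet below combines KLT feature tracking with a RANSAC-filtered homography using OpenCV; the frame file names are hypothetical.

```python
# Hedged sketch: KLT feature tracking between two frames plus a RANSAC-filtered
# homography, in the spirit of the paper's camera localization stage.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01, minDistance=7)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# RANSAC rejects tracking outliers while estimating the inter-frame motion model.
H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, ransacReprojThreshold=3.0)
print("inlier ratio:", inliers.mean())
```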

  18. Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Søgaard, Jacob; Bech, Søren

    2016-01-01

    This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant ... is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net ...
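
    As an illustration of one of the learners named above, the sketch below fits partial least squares regression from objective metric values to subjective scores with scikit-learn; the file names and feature layout are hypothetical, not the paper's.

```python
# Hedged sketch: predicting subjective quality scores from objective metric values
# with partial least squares regression (one of the three learners mentioned).
# File names and feature layout are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.loadtxt("objective_metrics.csv", delimiter=",")   # hypothetical: rows = videos
y = np.loadtxt("mos_scores.csv", delimiter=",")          # hypothetical: mean opinion scores

pls = PLSRegression(n_components=3)
pls.fit(X, y)
predicted_mos = pls.predict(X)
```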

  19. Head-worn display-based augmented reality system for manufacturing

    Science.gov (United States)

    Sarwal, Alok; Baker, Chris; Filipovic, Dragan

    2005-05-01

    This system provides real-time guidance for training and problem-solving on production-line machinery. A prototype of a wearable, real-time, video guidance, interactive system for use in manufacturing, has been developed and demonstrated. Anticipated benefits are: relatively inexperienced personnel can provide machine servicing and the dependency on the vendor to repair or maintain equipment is significantly reduced. Additionally, servicing, training or part change-over schedules can be exercised more predictably and with less training. This approach utilizes Head Worn Display or Head Mounted Display (HMD) technology that can be readily adapted for various machines on the factory floor with training steps for a new location. Such a system can support various applications in manufacturing such as direct video guiding or applying scheduled maintenance and training to effectively resolve servicing emergencies and reduce machine downtime. It can also provide training of inexperienced operators and maintenance personnel. The gap between production line complexity and ability of production personnel to effectively maintain equipment is expected to widen in the future and advanced equipment will require complex servicing procedures that are neither well documented nor user-friendly. This system offers benefits in increased manufacturing equipment availability by facilitating effective servicing and training and can interface to a server system for additional computational resources on an as-needed basis. This system utilizes markers to guide the user and enforces a well defined sequence of operations. It performs augmentation of information on the display in order to provide guidance in real-time.

  20. Development of P4140 video data wall projector; Video data wall projector

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, H.; Inoue, H. [Toshiba Corp., Tokyo (Japan)

    1998-12-01

    The P4140 is a 3 cathode-ray tube (CRT) video data wall projector for super video graphics array (SVGA) signals. It is used as an image display unit, providing a large screen when several sets are put together. A high-quality picture has been realized by higher resolution and improved color uniformity technology. A new convergence adjustment system has also been developed through the optimal combination of digital and analog technologies. This video data wall installation has been greatly enhanced by the automation of cubes and cube performance settings. The P4140 video data wall projector can be used for displaying not only data but video as well. (author)

  1. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  2. A comparison of head-mounted and hand-held displays for 360° videos with focus on attitude and behavior change

    DEFF Research Database (Denmark)

    Fonseca, Diana; Kraus, Martin

    2016-01-01

    The present study is designed to test how immersion, presence, and narrative content (with a focus on emotional immersion) can affect one's pro-environmental attitude and behavior, with specific interest in 360° videos and meat consumption as a non-pro-environmental behavior. This research describes a between-group design experiment that compares two systems with different levels of immersion and two types of narratives, one with and one without emotional content. In the immersive video (IV) condition (high immersion), 21 participants used a Head-Mounted Display (HMD) to watch an emotional 360° video about meat consumption and its effects on the environment; another 21 participants experienced the tablet condition (low immersion) where they viewed the same video but with a 10.1 inch tablet; 22 participants in the control condition viewed a non-emotional video about submarines with an HMD ...

  3. Design and hardware alternatives for a Safety-Parameter Display System

    International Nuclear Information System (INIS)

    Honeycutt, F.; Merten, W.T.; Roy, G.M.; Segraves, E.; Stone, G.P.

    1981-05-01

    The SPDS is a dedicated control room operator aid and is viewed as an important safety improvement within the context of other post-TMI fixes. Hardware configurations and components to implement the NSAC display format of a Safety Parameter Display System (SPDS) are evaluated. The evaluation was made on the basis of five alternative hardware configurations which use commercially available components. Four of the alternatives use computer/video display architecture. The fifth alternative is a simple hardwired system which uses strip chart recorders. SPDS regulatory requirements are defined by NUREG 0696. Overall feasibility of the NSAC concept was evaluated in terms of performance, reliability, cost, licensability, and flexibility. The flexibility evaluation relates to the ability to handle other display formats, the data acquisition needs of the other emergency facilities and the impact of expected future NRC requirements

  4. Staff acceptance of video monitoring for coordination: a video system to support perioperative situation awareness.

    Science.gov (United States)

    Kim, Young Ju; Xiao, Yan; Hu, Peter; Dutton, Richard

    2009-08-01

    To understand staff acceptance of a remote video monitoring system for operating room (OR) coordination. Improved real-time remote visual access to the OR may enhance situational awareness but also raises privacy concerns for patients and staff. Survey. A system was implemented in a six-room surgical suite to display OR monitoring video at an access-restricted control desk area. Image quality was manipulated to improve staff acceptance. Two months after installation, interviews and a survey were conducted on staff acceptance of video monitoring. About half of all OR personnel responded (n = 63). Overall levels of concern were low, with 53% rating no concern and 42% little concern. The top two reported uses of the video were to see if cases are finished and to see if a room is ready. Viewing the video monitoring system as useful did not reduce levels of concern. Staff in supervisory positions perceived less concern about the system's impact on privacy than did those supervised (p < 0.03). Concerns for patient privacy correlated with concerns for staff privacy and performance monitoring. Technical means such as manipulating image quality helped staff acceptance. Manipulation of image quality resulted in overall acceptance of monitoring video, with residual levels of concern. OR nurses may express staff privacy concern in the form of concerns over patient privacy. This study provided suggestions for technological and implementation strategies of video monitoring for coordination use in the OR. Deployment of communication technology and integration of clinical information will likely raise concerns over staff privacy and performance monitoring. The potential gain of increased information access may be offset by the negative impact of a sense of loss of autonomy.

  5. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects the relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
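
    The prototype itself is an FPGA design; as a hedged software analogue of motion-based frame selection (using plain frame differencing rather than the paper's clustering-based scheme), a minimal OpenCV loop might look like this:

```python
# Hedged sketch: a software analogue of motion-based frame selection using simple
# frame differencing, NOT the paper's clustering-based scheme or its VLSI design.
import cv2

cap = cv2.VideoCapture(0)              # or a video file / PAL camera source
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0.01 * mask.size:   # flag frames with relevant motion
        cv2.imshow("motion", frame)
        cv2.waitKey(1)
    prev = gray
```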

  6. A head-mounted display system for augmented reality: Initial evaluation for interventional MRI

    International Nuclear Information System (INIS)

    Wendt, M.; Wacker, F.K.

    2003-01-01

    Purpose: To discuss the technical details of a head mounted display with an augmented reality (AR) system and to describe a first pre-clinical evaluation in interventional MRI. Method: The AR system consists of a video-see-through head mounted display (HMD), mounted with a mini video camera for tracking and a stereo pair of mini cameras that capture live images of the scene. The live video view of the phantom/patient is augmented with graphical representations of anatomical structures from MRI image data and is displayed on the HMD. The application of the AR system with interventional MRI was tested using an MRI data set of the head and a head phantom. Results: The HMD enables the user to move around and observe the scene dynamically from various viewpoints. Within a short time the natural hand-eye coordination can easily be adapted to the slightly different view. The 3D perception is based on stereo and kinetic depth cues. A circular target with a diameter of 0.5 square centimeter was hit in 19 of 20 attempts. In a first evaluation the MRI image data augmented reality scene of a head phantom allowed good planning and precise simulation of a puncture. Conclusion: The HMD in combination with AR provides direct, intuitive guidance for interventional MR procedures. (orig.)

  7. Irradiation from video display terminals

    International Nuclear Information System (INIS)

    Backe, S.; Hannevik, M.

    1987-01-01

    Video display terminals (VDTs) are in common use by computer operators. In recent years this group of workers has expressed growing concern about their work environment and possible hazardous effects connected with radiation emission from VDTs. Radiation types, levels of emission and possible biological effects have been the subject of research activity in Norway and in other countries. This report summarizes the various radiation types and their levels of emission from VDTs. An overview of recent epidemiological studies and animal experiments, and the conclusions given by the research groups, is also presented. The conclusions drawn in this report, based on current knowledge, are: radiation types other than low-frequency pulsed magnetic fields have low, negligible emission levels and will not represent any health hazard to VDT operators or to the foetus of pregnant operators. The biological effects of low-frequency pulsed magnetic fields have been the subject of epidemiological studies and animal experiments. Epidemiological studies carried out in Canada, Finland, Sweden and Norway gave no support for any correlation between pregnancy complications and operation of VDTs. From animal experiments it has so far been impossible to assert an effect of low-frequency pulsed magnetic fields on pregnancy outcome

  8. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.
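    The player described above is written around the VLC ActiveX plug-in in C#. For readers who want to experiment with the same idea outside that environment, the sketch below is a rough analogue using the python-vlc bindings; the stream names and URLs are placeholders, not actual Launch Services feeds.

```python
# Rough analogue of the multi-stream menu idea using the python-vlc bindings.
import vlc

STREAMS = {                                   # placeholder feed names and URLs
    "pad_camera": "rtsp://example.local/pad",
    "tracking":   "rtsp://example.local/tracking",
}

instance = vlc.Instance("--network-caching=300")
player = instance.media_player_new()

def switch_to(name: str) -> None:
    """Stop the current feed and start the one selected from the menu."""
    player.stop()
    player.set_media(instance.media_new(STREAMS[name]))
    player.play()

switch_to("pad_camera")                       # e.g. the default view at startup
```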

  9. [Influence of different lighting levels at workstations with video display terminals on operators' work efficiency].

    Science.gov (United States)

    Janosik, Elzbieta; Grzesik, Jan

    2003-01-01

    The aim of this work was to evaluate the influence of different lighting levels at workstations with video display terminals (VDTs) on the course of the operators' visual work, and to determine the optimal levels of lighting at VDT workstations. For two kinds of job (entry of figures from a typescript and editing of text displayed on the screen), the work capacity, the degree of visual strain and the operators' subjective symptoms were determined for four lighting levels (200, 300, 500 and 750 lx). It was found that work at VDT workstations may overload the visual system and cause eye complaints as well as a reduction of accommodation or convergence strength. It was also noted that editing text displayed on the screen is more burdensome for operators than entering figures from a typescript. Moreover, the examination results showed that the lighting at VDT workstations should be higher than 200 lx and that 300 lx makes the work conditions most comfortable during the entry of figures from a typescript, and 500 lx during the editing of text displayed on the screen.

  10. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
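    One of the test images is described precisely enough to reconstruct in outline: ten-pixel equilibration strips followed by alternating one-pixel black and white lines. The sketch below generates such a pattern with NumPy; the overall image size is an assumption, not a value from the paper.

```python
# Illustrative reconstruction of the slew-rate test pattern described above.
import numpy as np

def line_pair_pattern(width=512, height=512, strip=10):
    """Black strip, white strip (each `strip` px), then alternating 1-px lines."""
    row = np.zeros(width, dtype=np.uint8)      # start with an all-black row
    row[strip:2 * strip] = 255                 # white equilibration strip
    row[2 * strip::2] = 255                    # every other column white after that
    return np.tile(row, (height, 1))           # repeat the row down the image
```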

  11. Video Retrieval Based on Text and Image

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Abstract Video retrieval is used to search for a video based on a query entered by the user, in the form of text, an image, or both. Such a system can improve searching during video browsing and is expected to reduce video retrieval time. The purpose of the research was to design and build a video retrieval application based on the text and images in a video. The indexing process for text consists of tokenizing and filtering (stopword removal, stemming); the stemming results are saved in a text index table. The indexing process for images is to create a colour histogram of each image and compute the mean and standard deviation of each primary colour (red, green and blue, RGB); the extracted features are stored in an image table. Video retrieval can use a text query, an image query, or both. For a text query, the system looks up the text index table; if the query matches an entry, the system displays the video information corresponding to that text query. For an image query, the system computes the six extracted features of the query image (red, green and blue means and standard deviations) and looks them up in the image index table to display the matching video information. For a combined text-and-image query, the system displays the video information only if the text query and the image query refer to the same film title. Keywords: video, index, retrieval, text, image
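    The image features described above (per-channel mean and standard deviation of the RGB colours) are simple to compute. The sketch below shows one possible implementation with NumPy and Pillow, plus a nearest-neighbour match against an index; the function and variable names are the editor's, not the authors'.

```python
# Sketch of the RGB mean/standard-deviation features and a simple index lookup.
import numpy as np
from PIL import Image

def rgb_features(path: str) -> np.ndarray:
    """Return [mean_R, mean_G, mean_B, std_R, std_G, std_B] for one image."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    pixels = rgb.reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def best_match(query: np.ndarray, index: dict) -> str:
    """Return the indexed title whose stored features are closest to the query."""
    return min(index, key=lambda title: np.linalg.norm(index[title] - query))
```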

  12. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of surgery, its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented by 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  13. Modeling the Subjective Quality of Highly Contrasted Videos Displayed on LCD With Local Backlight Dimming

    DEFF Research Database (Denmark)

    Mantel, Claire; Bech, Søren; Korhonen, Jari

    2015-01-01

    Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage ...

  14. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    Science.gov (United States)

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital-signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the current step of the procedure. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases the TURP procedure was successfully performed, and the postoperative clinical courses showed no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.
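    The four-split composite the PIM System shows on the HMD can be pictured with a few lines of image code. The sketch below stacks four source frames into a 2x2 grid with OpenCV and NumPy; the quadrant size and the source names (cystoscope, TRUS, head camera, vital signs) are stand-ins, not details taken from the device.

```python
# Illustrative 2x2 composite of four video sources, as in a four-split screen.
import cv2
import numpy as np

QUADRANT = (640, 360)                              # per-source size (width, height)

def fit(frame):
    """Resize one source frame to a quadrant of the composite."""
    return cv2.resize(frame, QUADRANT)

def composite(cysto, trus, head_cam, vitals):
    """Stack the four feeds into a single image for one display."""
    top = np.hstack([fit(cysto), fit(trus)])
    bottom = np.hstack([fit(head_cam), fit(vitals)])
    return np.vstack([top, bottom])
```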

  15. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5-MeV H - ion beam as it exits the intermediate matching section. Inelastic collisions between H - ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam-profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented

  16. High-Definition 3D Stereoscopic Microscope Display System for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Yoo Kwan-Hee

    2010-01-01

    Full Text Available Biomedical research has been performed by using advanced information techniques, and micro high-quality stereo images have been used by researchers and doctors for various purposes in biomedical research and surgery. To visualize the stereo images, many related devices have been developed. However, the devices are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. In this paper, we describe the development of a high-definition (HD) three-dimensional (3D) stereoscopic imaging display system for operating a microscope or experimenting on animals. The system consists of a stereoscopic camera part, an image processing device for stereoscopic video recording, and a stereoscopic display. In order to reduce eyestrain and viewer fatigue, we use a preexisting stereo microscope structure and a polarized-light stereoscopic display method that does not reduce the quality of the stereo images. The developed system can overcome the discomfort of the eyepiece and the eyestrain caused by use over a long period of time.

  17. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to display a picture to the left eye and the right eye. The left and right images are a pair of stereoscopic images for the left and right eyes, so stereoscopic 3D images are observed.

  18. Millisecond accuracy video display using OpenGL under Linux.

    Science.gov (United States)

    Stewart, Neil

    2006-02-01

    To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
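    The article presents its own C program for X Windows; the fragment below only sketches the underlying idea in Python with the glfw and PyOpenGL packages: swap at the vertical retrace, block until the swap has completed, then timestamp the stimulus onset. It is an illustration under those assumptions, not the author's code.

```python
# Timestamping a stimulus at the buffer swap (vertical retrace) with OpenGL.
import time
import glfw
from OpenGL.GL import glClear, glClearColor, glFinish, GL_COLOR_BUFFER_BIT

glfw.init()
window = glfw.create_window(800, 600, "stimulus", None, None)
glfw.make_context_current(window)
glfw.swap_interval(1)                        # synchronise swaps to the retrace

glClearColor(1.0, 1.0, 1.0, 1.0)             # the "stimulus": a plain white field
glClear(GL_COLOR_BUFFER_BIT)
glfw.swap_buffers(window)                    # queued for the next retrace
glFinish()                                   # block until the swap has happened
onset = time.monotonic()                     # stimulus onset time
print(f"stimulus onset at {onset:.6f} s")
glfw.terminate()
```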

  19. A faster technique for rendering meshes in multiple display systems

    Science.gov (United States)

    Hand, Randall E.; Moorhead, Robert J., II

    2003-05-01

    Level-of-detail algorithms have widely been implemented in architectural VR walkthroughs and video games, but have not had widespread use in VR terrain visualization systems. This thesis explains a set of optimizations that allow most current level-of-detail algorithms to run in the types of multiple display systems used in VR. It improves both the visual quality of the system, through use of graphics hardware acceleration, and the framerate and running time, through modifications to the computations that drive the algorithms. Using ROAM as a testbed, results show improvements between 10% and 100% on varying machines.

  20. Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design

    Science.gov (United States)

    1984-04-01

    Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based training ... employed as learning vehicles, the especially compelling characteristics of electronic video games have not been fully explored for possible exploitation ... new electronic video games. Accordingly, the following experiment was designed to determine those dimensions along which electronic

  1. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5 MeV H - ion beam as it exits the intermediate matching section. Inelastic collisions between H - ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented. (Author) 5 refs., 7 figs

  2. Video Bandwidth Compression System.

    Science.gov (United States)

    1980-08-01

    Only fragments of this report's contents page survive in this record. They indicate a decoder built around a bit unpacker and inverse DPCM stage (slave sync and loop boards), an inverse transform board, a composite video output board, and a display refresh memory with its own timing and control, with a scaling function located between the inverse DPCM and the inverse transform on the decoder matrix-multiplier chips.
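    For orientation only, the fragment below sketches the DPCM encode/decode step that the surviving board names refer to, assuming a first-order predictor and a uniform quantiser; the report's actual predictor, quantiser and transform are not given in this record.

```python
# Minimal first-order DPCM encoder and its inverse (illustrative, not the report's design).
import numpy as np

def dpcm_encode(samples: np.ndarray, step: int = 4) -> np.ndarray:
    """Quantised difference between each sample and the decoder's running prediction."""
    codes = np.zeros(len(samples), dtype=np.int32)
    prediction = 0
    for i, s in enumerate(samples):
        codes[i] = int(round((int(s) - prediction) / step))
        prediction += codes[i] * step          # track exactly what the decoder will see
    return codes

def dpcm_decode(codes: np.ndarray, step: int = 4) -> np.ndarray:
    """Inverse DPCM: accumulate the de-quantised differences."""
    return np.cumsum(codes * step)
```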

  3. Head-motion-controlled video goggles: preliminary concept for an interactive laparoscopic image display (i-LID).

    Science.gov (United States)

    Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I

    2009-08-01

    Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance of all viewers on the same image capture and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position. This can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image by changes in spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD
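    The pan/zoom behaviour described above maps head orientation to the portion of the wide-angle image that is shown. A hypothetical version of that mapping is sketched below with NumPy; the angular gains and the assumption that the fish-eye image has already been corrected are the editor's, not the authors'.

```python
# Hypothetical head-orientation to pan/zoom window mapping for an HMD view.
import numpy as np

def view_window(frame: np.ndarray, yaw: float, pitch: float, zoom: float):
    """Crop a pan/zoom window from a distortion-corrected wide-angle frame.

    yaw/pitch are head angles in degrees; zoom > 1 narrows the displayed field.
    """
    h, w = frame.shape[:2]
    win_w, win_h = int(w / zoom), int(h / zoom)
    cx = int(w / 2 + yaw * (w / 90.0))        # ~90 degrees of yaw sweeps the full width
    cy = int(h / 2 + pitch * (h / 90.0))
    x0 = int(np.clip(cx - win_w // 2, 0, w - win_w))
    y0 = int(np.clip(cy - win_h // 2, 0, h - win_h))
    return frame[y0:y0 + win_h, x0:x0 + win_w]
```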

  4. Large-video-display-format conversion

    NARCIS (Netherlands)

    Haan, de G.

    2000-01-01

    High-quality video-format converters apply motion estimation and motion compensation to prevent jitter resulting from picture-rate conversion, and aliasing due to de-interlacing, in sequences with motion. Although initially considered as too expensive, high-quality conversion is now economically
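    The motion estimation mentioned above is the costly part of such converters. Purely as an illustration of what a motion vector field is, the sketch below implements a naive full-search block matcher in NumPy; practical converters use far cheaper estimators (for example, recursive search), so this is not the method of the record above.

```python
# Naive full-search block matching: one (dy, dx) motion vector per 16x16 block.
import numpy as np

def block_motion(prev: np.ndarray, curr: np.ndarray, block=16, search=8):
    """Return a (rows, cols, 2) array of motion vectors from prev to curr (grey frames)."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(np.int32)
                        sad = int(np.abs(target - cand).sum())   # sum of absolute differences
                        if best_sad is None or sad < best_sad:
                            best_sad, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors
```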

  5. Subjective quality of videos displayed with local backlight dimming at different peak white and ambient light levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Korhonen, Jari; Forchhammer, Søren

    2015-01-01

    In this paper the influence of ambient light and the peak white (maximum brightness) of a display on the subjective quality of videos shown with local backlight dimming is examined. A subjective experiment investigating those factors is set up using high-contrast test sequences. The results are firstly...

  6. Competitive action video game players display rightward error bias during on-line video game play.

    Science.gov (United States)

    Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria

    2017-09-12

    Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research in the future, however further study is required before one can determine whether these results are an artefact of the method applied, or representative of a genuine rightward bias.

  7. Unattended video surveillance systems for international safeguards

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1979-01-01

    The use of unattended video surveillance systems places some unique requirements on the systems and their hardware. The systems have the traditional requirements of video imaging, video storage, and video playback but also have some special requirements such as tamper safing. The technology available to meet these requirements and how it is being applied to unattended video surveillance systems are discussed in this paper

  8. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  9. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  10. A passive cooling system proposal for multifunction and high-power displays

    Science.gov (United States)

    Tari, Ilker

    2013-03-01

    Flat panel displays are conventionally cooled by internal natural convection, which constrains the possible rate of heat transfer from the panel. On the one hand, during the last few years, the power consumption and the related cooling requirement of 1080p displays have decreased, mostly due to energy savings from the switch to LED backlighting and more efficient electronics. On the other hand, the required cooling rate has recently started to increase with new directions in the industry such as 3D displays and ultra-high-resolution displays (recent 4K announcements and the planned introduction of 8K). In addition to these trends in display technology itself, there is also a trend to integrate consumer entertainment products into displays, with the ultimate goal of designing a multifunction device replacing the TV, the media player, the PC, the game console and the sound system. Considering the increasing power requirement for higher fidelity in video processing, these multifunction devices tend to generate very high heat fluxes, which are impossible to dissipate with internal natural convection. In order to overcome this obstacle, instead of active cooling with forced convection, which comes with the drawbacks of noise, additional power consumption, and reduced reliability, a passive cooling system relying on external natural convection and radiation is proposed here. The proposed cooling system consists of a heat-spreader flat heat pipe and an aluminum plate-finned heat sink with anodized surfaces. For this system, the possible maximum heat dissipation rates from standard-size panels (in the 26-70 inch range) are estimated by using our recently obtained heat transfer correlations for natural convection from aluminum plate-finned heat sinks together with surface-to-surface radiation. With the use of the proposed passive cooling system, the possibility of dissipating very high heat rates is demonstrated, hinting at a promising green alternative to active cooling.
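    As a back-of-the-envelope companion to the estimate mentioned above, the sketch below adds a natural-convection term and a radiation term for a panel back; the convection coefficient, emissivity, area and temperatures are illustrative assumptions, not the correlations or values from the paper.

```python
# Rough passive heat-dissipation estimate: natural convection plus radiation.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4

def passive_dissipation(area_m2, t_surface_c, t_ambient_c,
                        h_conv=6.0, emissivity=0.85):
    """Return dissipated power in watts for the assumed coefficients."""
    ts, ta = t_surface_c + 273.15, t_ambient_c + 273.15
    q_conv = h_conv * area_m2 * (ts - ta)                    # natural convection
    q_rad = emissivity * SIGMA * area_m2 * (ts**4 - ta**4)   # radiation to surroundings
    return q_conv + q_rad

# e.g. a ~55-inch panel back (~0.83 m^2) at 45 C in a 25 C room: roughly 190 W
print(passive_dissipation(0.83, 45.0, 25.0))
```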

  11. GPLS VME Module: A Diagnostic and Display Tool for NSLS Micro Systems

    International Nuclear Information System (INIS)

    Ramamoorthy, S.; Smith, J. D.

    1999-01-01

    The General Purpose Light Source VME module is an integral part of every front-end micro in the NSLS control system. The board incorporates features such as a video character generator, clock signals, a time-of-day clock, a VME bus interrupter and general-purpose digital inputs and outputs. This module serves as a valuable diagnostic and real-time display tool for micro development as well as for the final operational systems. This paper describes the functions provided by the board for the NSLS micro control monitor software

  12. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  13. Three-dimensional Imaging, Visualization, and Display

    CERN Document Server

    Javidi, Bahram; Son, Jung-Young

    2009-01-01

    Three-Dimensional Imaging, Visualization, and Display describes recent developments, as well as the prospects and challenges facing 3D imaging, visualization, and display systems and devices. With the rapid advances in electronics, hardware, and software, 3D imaging techniques can now be implemented with commercially available components and can be used for many applications. This volume discusses the state-of-the-art in 3D display and visualization technologies, including binocular, multi-view, holographic, and image reproduction and capture techniques. It also covers 3D optical systems, 3D display instruments, 3D imaging applications, and details several attractive methods for producing 3D moving pictures. This book integrates the background material with new advances and applications in the field, and the available online supplement will include full color videos of 3D display systems. Three-Dimensional Imaging, Visualization, and Display is suitable for electrical engineers, computer scientists, optical e...

  14. Display systems for NPP control

    International Nuclear Information System (INIS)

    Rozov, S.S.

    1988-01-01

    Main trends in development of display systems used as the means for image displaying in NPP control systems are considered. It is shown that colour display devices appear to be the most universal means for concentrated data presentation. Along with digital means the display systems provide for high-speed response, sufficient for operative control of executive mechanisms. A conclusion is drawn that further development of display systems will move towards creation of large colour fields (on reflection base or with multicolour gas-discharge elements)

  15. Acquisition, compression and rendering of depth and texture for multi-view video

    NARCIS (Netherlands)

    Morvan, Y.

    2009-01-01

    Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized

  16. New ultraportable display technology and applications

    Science.gov (United States)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-08-01

    MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and system-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color-generating gratings, already part of the CMOS circuit fabrication process, is effectively cost- and manufacturing-process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head-mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from maintenance and repair support, to night-vision systems, to portable projectors for mobile command and control centers.

  17. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in the real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. Such systems are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also threaten video surveillance applications. As a result, cybercrime, illegal video access, mishandling of videos, and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  18. Real-time video streaming system for LHD experiment using IP multicast

    International Nuclear Information System (INIS)

    Emoto, Masahiko; Yamamoto, Takashi; Yoshida, Masanobu; Nagayama, Yoshio; Hasegawa, Makoto

    2009-01-01

    In order to accomplish smooth cooperative research, remote participation plays an important role. For this purpose, the authors have been developing various applications for remote participation in the LHD (Large Helical Device) experiments, such as a Web interface for visualization of acquired data. The video streaming system is one of them. It is useful for grasping the status of the ongoing experiment remotely, and we provide the video images displayed in the control room to remote users. However, usual streaming servers cannot send video images without delay. The delay changes depending on how the images are sent, but even a little delay might become critical if researchers use the images to adjust diagnostic devices. One of the main causes of delay is the procedure of compressing and decompressing the images. Furthermore, the commonly used video compression methods are lossy; they remove less important information to reduce the size. However, lossy images cannot be used for physical analysis because the original information is lost. Therefore, video images for remote participation should be sent without compression in order to minimize the delay and to supply high-quality images suitable for physical analysis. However, sending uncompressed video images requires large network bandwidth. For example, sending 5 frames of 16-bit color SXGA images per second requires about 100 Mbps. Furthermore, the video images must be sent to several remote sites simultaneously. It is hard for a server PC to handle such a large amount of data. To cope with this problem, the authors adopted IP multicast to send video images to several remote sites at once. Because IP multicast packets are sent only to the networks on which clients want the data, the load on the server does not depend on the number of clients, and the network load is reduced. In this paper, the authors discuss the feasibility of a high-bandwidth video streaming system using IP multicast. (author)
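    The multicast idea itself is compact: the sender transmits each frame once to a group address and any number of receivers subscribe to it. The sketch below shows a minimal sender with the standard socket module; the group address, port and payload size are placeholders, and the LHD system's actual packet format is not described in this record.

```python
# Minimal IP multicast sender: one transmission reaches all subscribed receivers.
import socket

GROUP, PORT = "239.0.0.1", 5004         # placeholder multicast group and port
PAYLOAD = 1400                          # keep each datagram under the Ethernet MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)   # allow a few hops

def send_frame(frame_bytes: bytes) -> None:
    """Send one uncompressed frame as a sequence of numbered datagrams."""
    for seq, offset in enumerate(range(0, len(frame_bytes), PAYLOAD)):
        chunk = frame_bytes[offset:offset + PAYLOAD]
        sock.sendto(seq.to_bytes(4, "big") + chunk, (GROUP, PORT))
```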

  19. Radiation survey at video display terminals (VDTs): a credibility issue

    International Nuclear Information System (INIS)

    Deitchman, R.; Gross, L.

    1986-01-01

    New York Telephone and Harvard University routinely monitor video display terminals as part of employee education or office design projects. Measurements are made with sensitive geiger end window or pancake detectors at the screen surface. In previous years all measurements indicated no difference from background levels in the occupied space. Recently, some newer VDTs were found to have measurable levels consistently above normal background at the screen surface. A gamma spectral analysis was made of one of the VDTs using a high resolution Ge-Li gamma ray detector coupled to a multi-channel gamma ray spectrometer. A slightly elevated potassium-40 level was detected and it was hypothesized that the potassium-40 was contained in the glass of the screen surface. The authors recommend that VDTs should be surveyed with the unit turned off to determine if the source of elevated readings may be in the glass. They also recommend expert advice in determining the proper radiation monitoring instrumentation for use in making these measurements

  20. Effects of Viewing Displays from Different Distances on Human Visual System

    Directory of Open Access Journals (Sweden)

    Mohamed Z. Ramadan

    2017-11-01

    Full Text Available The current stereoscopic 3D displays have several human-factor issues, including visual-fatigue symptoms such as eyestrain, headache, fatigue, nausea, and malaise. The viewing time and viewing distance are factors that considerably affect the visual fatigue associated with 3D displays. Hence, this study analyzes the effects of display type (2D vs. 3D) and viewing distance on visual fatigue during a 60-min viewing session based on electroencephalogram (EEG) relative beta power and the alpha/beta power ratio. In this study, twenty male participants watched four videos. The EEGs were recorded at two occipital sites (O1 and O2) of each participant in the pre-session (3 min), the post-session (3 min), and during the 60-min viewing session. The results showed that the decrease in relative beta power of the EEG and the increase in the alpha/beta ratio from the start until the end of the viewing session were significantly higher when watching the 3D display. When the viewing distance was increased from 1.95 m to 3.90 m, visual fatigue decreased in the case of the 3D display, whereas it increased in the case of the 2D display. Moreover, there was approximately the same level of visual fatigue when watching videos in 2D or 3D from a long viewing distance (3.90 m).
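    The two EEG measures used above are straightforward to compute from a power spectral density. The sketch below does so with scipy.signal.welch for one channel; the band limits and sampling rate are typical values assumed by the editor, not those reported in the study.

```python
# Relative beta power and alpha/beta ratio from one EEG channel (illustrative bands).
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz."""
    sel = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[sel], freqs[sel])

def fatigue_indices(eeg: np.ndarray, fs: float = 256.0):
    """Return (relative beta power, alpha/beta ratio) for one occipital channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))
    total = band_power(freqs, psd, 4.0, 30.0)      # theta-to-beta range as reference
    alpha = band_power(freqs, psd, 8.0, 13.0)
    beta = band_power(freqs, psd, 13.0, 30.0)
    return beta / total, alpha / beta
```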

  1. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system that is onboard the Space Shuttle has the following capabilities: camera, video signal switching and routing unit (VSU); and Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state-of-the-art in video technology and data storage systems, a survey was conducted of the High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of the state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems versus users' requirements are shown graphically.

  2. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images has great significance for military and medical applications, but nighttime video images have such poor quality that the target and background cannot be recognized. We therefore enhance nighttime video by fusing infrared and visible video images. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse heterologous nighttime images. A transfer matrix is derived from the improved SIFT algorithm; the transfer matrix rapidly registers the heterologous nighttime images, and the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, the transfer matrix is used to register every frame and the αβ-weighted method to fuse every frame, which meets the timing requirement of soft real-time video. The fused video image not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and the fused video plays back smoothly.
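    The fusion stage itself reduces to a warp followed by a weighted sum. The sketch below shows that step with OpenCV, assuming the transfer matrix (a homography) has already been estimated from matched keypoints; the weight value is an arbitrary example, and the paper's improved SIFT matching is not reproduced.

```python
# Register the infrared frame with a homography, then blend it with the visible frame.
import cv2

def fuse_frames(ir_frame, visible_frame, transfer_matrix, alpha=0.6):
    """Warp the IR frame onto the visible frame and blend them per pixel."""
    h, w = visible_frame.shape[:2]
    ir_registered = cv2.warpPerspective(ir_frame, transfer_matrix, (w, h))
    if ir_registered.ndim == 2:                      # single-channel IR -> 3 channels
        ir_registered = cv2.cvtColor(ir_registered, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(ir_registered, alpha, visible_frame, 1.0 - alpha, 0.0)
```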

  3. Ultrasonic recording and display techniques for the inspection of nuclear power plant

    International Nuclear Information System (INIS)

    Ely, R.W.; Hall, G.D.; Johnson, A.; Pascoe, P.T.

    1985-01-01

    This paper describes four systems: MDU, PURDIE, LAURA and DRUID, under development as ultrasonic recording and display techniques for the inspection of nuclear power plant. The MDU system plots either plan or sectional views of the component under test onto a bistable storage screen. PURDIE is a system based around a video cassette recorder which has been modified to record ultrasonic A-scan waveforms and probe positional information. MDU and PURDIE are portable systems, for use under difficult site conditions. They may be manufactured in quantity to satisfy the demanding inspection programmes of nuclear power stations. LAURA is a desk top replay system for the video cassette tapes produced on site by PURDIE. DRUID is a digital desk top replay/display system incorporating a high resolution colour graphics terminal and therefore offering more flexibility and improved display formats. The systems are compatible with each other and some component units are directly interchangeable between the various systems

  4. Cobra: A content-based video retrieval system

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, W.; Jensen, C.S.; Jeffery, K.G.; Pokorny, J.; Saltenis, S.; Bertino, E.; Böhm, K.; Jarke, M.

    2002-01-01

    An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level

  5. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    Science.gov (United States)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  6. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    Science.gov (United States)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describe a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD Fire Pro W600 video card with 6 mini display port connections. Six mini display-to-dual DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video

  7. Mechanisms of video-game epilepsy.

    Science.gov (United States)

    Fylan, F; Harding, G F; Edson, A S; Webb, R M

    1999-01-01

    We aimed to elucidate the mechanisms underlying video-game epilepsy by comparing the flicker- and spatial-frequency ranges over which photic and pattern stimulation elicited photoparoxysmal responses in two different populations: (a) 25 patients with a history of seizures experienced while playing video games; and (b) 25 age- and medication-matched controls with a history of photosensitive epilepsy, but no history of video-game seizures. Abnormality ranges were determined by measuring photoparoxysmal EEG abnormalities as a function of the flicker frequency of patterned and diffuse intermittent photic stimulation (IPS) and the spatial frequency of patterns on a raster display. There was no significant difference between the groups in respect of the abnormality ranges elicited by patterned or diffuse IPS or by spatial patterns. When the groups were compared at one specific IPS frequency (50 Hz, the flicker frequency of European television displays), however, the video-game patients were significantly more likely to be sensitive. The results suggest that video-game seizures are a manifestation of photosensitive epilepsy. The increased sensitivity of video-game patients to IPS at 50 Hz indicates that display flicker may underlie video-game seizures. The similarity in photic- and pattern-stimulation ranges over which abnormalities are elicited in video-game patients and controls suggests that all patients with photosensitive epilepsy may be predisposed toward video-game-induced seizures. Photosensitivity screening should therefore include assessment by using both IPS at 50 Hz and patterns displayed on a television or monitor with a 50-Hz frame rate.

  8. Advanced Colorimetry of Display Systems: Tetra-Chroma3 Display Unit

    Directory of Open Access Journals (Sweden)

    J. Kaiser

    2005-06-01

    Full Text Available High-fidelity color image reproduction is one of the key issues in visual telecommunication systems, for electronic commerce, telemedicine, digital museums and so on. All colorimetric standards of display systems are, to the present day, trichromatic. But from the shape of the horseshoe area of all existing colors in the CIE xy chromaticity diagram it follows that this area cannot be covered with three real reproduction lights. The color gamut of a display device can be expanded in a few ways. In this paper, the approach of increasing the number of primaries is studied. A fourth, cyan primary is added to the three conventional ones to enlarge the color gamut of reproduction towards cyans and yellow-oranges. An original method of color management for this new display unit is introduced. In addition, the color gamut of the designed additive-based display is successfully compared with the color gamut of a modern subtractive-based system. A display with more than three primary colors is called a multiprimary color display. A very advantageous property of such a display is the possibility to display metameric colors.

  9. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  10. Simulated laparoscopy using a head-mounted display vs traditional video monitor: an assessment of performance and muscle fatigue.

    Science.gov (United States)

    Maithel, S K; Villegas, L; Stylopoulos, N; Dawson, S; Jones, D B

    2005-03-01

    The direction of visual gaze may be an important ergonomic factor that affects operative performance. We designed a study to determine whether a head-mounted display (HMD) worn by the surgeon would improve task performance and/or reduce muscle fatigue during a laparoscopic task when compared to the use of a traditional video monitor display (VMD). Surgical residents (n = 30) were enrolled in the study. A junior group, consisting of 15 postgraduate year (PGY) 1 subjects with no previous laparoscopic experience, and a senior group, consisting of 15 PGY 4 and PGY 5 subjects with experience, completed a laparoscopic task that was repeated four times using the Computer Enhanced Laparoscopic Training System (CELTS). Groups alternated between using the HMD with the task placed in a downward frontal position and the VMD with the task at a 30-degree lateral angle. The CELTS module assessed task completion time, depth perception, path length of instruments, response orientation, and motion smoothness; the system then generated an overall score. Electromyography (EMG) was used to record sternocleidomastoid muscle activity. Display preference was surveyed. The senior residents performed better than the junior residents overall on all parameters (p < 0.05) except for motion smoothness, where there was no difference. In both groups, the HMD significantly improved motion smoothness when compared to the VMD (p < 0.05). All other parameters were equal. There was less muscle fatigue when using the VMD (p < 0.05). We found that 66% of the junior residents but only 20% of the senior residents preferred the HMD. The CELTS module demonstrated evidence of construct validity by differentiating the performances of junior and senior residents. By aligning the surgeon's visual gaze with the instruments, the HMD improved smoothness of motion. Experienced residents preferred the traditional monitor display. Although the VMD produced less muscle fatigue, inexperienced residents preferred the HMD

  11. 78 FR 11988 - Open Video Systems

    Science.gov (United States)

    2013-02-21

    ... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 [CS Docket No. 96-46, FCC 96-334] Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule; announcement of effective date... 43160, August 21, 1996. The final rules modified rules and policies concerning Open Video Systems. DATES...

  12. High Resolution Displays Using NCAP Liquid Crystals

    Science.gov (United States)

    Macknick, A. Brian; Jones, Phil; White, Larry

    1989-07-01

    Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.

  13. Safety parameter display system functions are integrated parts of the KWU KONVOI process information system (SPDS functions are parts of the KWU-PRINS)

    International Nuclear Information System (INIS)

    Aleite, W.; Geyer, K.H.

    1984-01-01

    The desirability of having flexible overview as well as extended detail information, with pictorial and abstraction features and easy and quick access throughout the large-size control rooms in German plants, has been recognized. Developments over the last years now make it possible to add extensive computer-driven VDU systems to the three German KONVOI NPPs (Isar II, Emsland and Neckarwestheim II), thereby creating the Process Information System ''PRINS''. The new system is driven by multiple computers at different locations controlling about 30 full-graphic, high-resolution Video Display Units. They are arranged singly and in three ''mxn - Information Panels'' distributed about the control room and present all conceivable kinds of display formats, with more than 1000 separate pictures. The display of only single ''Safety Parameters'' or even complete ''Safety Goal Information'' on single or multiple VDUs in parallel is only one aspect of this computerized part of the entire integrated Information System. (orig./HP)

  14. Application of Video Recognition Technology in Landslide Monitoring System

    Directory of Open Access Journals (Sweden)

    Qingjia Meng

    2018-01-01

    Full Text Available Video recognition technology is applied to a landslide emergency remote monitoring system, which this paper uses to identify the trajectories of a landslide. The geological-disaster monitoring system combines video recognition technology with the analysis of landslide monitoring data. The landslide video monitoring system transmits video image information, time stamps, network signal strength and power-supply status to the server over a 4G network. The data are analysed comprehensively through a remote man-machine interface, and the front-end video surveillance system is triggered either when a threshold is reached or by manual control. The target landslide video is then passed to intelligent identification: the algorithm, embedded in the intelligent analysis module, detects, analyses, filters and morphologically processes the video frames. An algorithm based on artificial intelligence and pattern recognition marks the target landslide in the video frame and confirms whether the monitored slope is behaving normally. The landslide video monitoring system realizes remote monitoring and control from the mobile side and provides a quick and easy monitoring technology.
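
    The abstract does not give the detection algorithm in detail; as a rough illustration of the threshold-triggered change detection it describes, a minimal Python/OpenCV sketch (the stream URL and threshold values are hypothetical) might look like this:

        import cv2

        ALARM_FRACTION = 0.02   # hypothetical threshold: fraction of changed pixels that raises an alarm
        PIXEL_DELTA = 25        # hypothetical per-pixel intensity change considered significant

        def detect_motion(prev_gray, curr_gray):
            """Return True if the scene changed enough to suggest slope movement."""
            diff = cv2.absdiff(prev_gray, curr_gray)
            _, mask = cv2.threshold(diff, PIXEL_DELTA, 255, cv2.THRESH_BINARY)
            kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # morphological clean-up
            changed = cv2.countNonZero(mask) / mask.size
            return changed > ALARM_FRACTION

        cap = cv2.VideoCapture("rtsp://camera/stream")   # placeholder URL for the front-end camera
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if detect_motion(prev, curr):
                print("possible slope movement - notify server")   # stand-in for the 4G upload step
            prev = curr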

  15. Design of a system based on DSP and FPGA for video recording and replaying

    Science.gov (United States)

    Kang, Yan; Wang, Heng

    2013-08-01

    This paper presents a video recording and replaying system built on a Digital Signal Processor (DSP) and Field Programmable Gate Array (FPGA) architecture. The system encodes, records, decodes and replays the Video Graphics Array (VGA) signals displayed on a monitor during the navigation of airplanes and ships. In this architecture, the DSP is the main processor, handling the large amount of complicated calculation required for digital signal processing, while the FPGA is a coprocessor that preprocesses the video signals and implements logic control. In the hardware design, the Peripheral Device Transfer (PDT) function of the External Memory Interface (EMIF) is used to implement a seamless interface among the DSP, the synchronous dynamic RAM (SDRAM) and the First-In-First-Out (FIFO) buffer; this transfer mode avoids a data-transfer bottleneck and simplifies the circuitry between the DSP and its peripheral chips. The DSP's EMIF and two level-matching chips implement the Advanced Technology Attachment (ATA) protocol on the physical layer of the interface to an Integrated Drive Electronics (IDE) Hard Disk (HD), which offers high-speed data access without relying on a computer. The main functions of the FPGA logic are described and screenshots of the behavioral simulation are provided. In the DSP software design, Enhanced Direct Memory Access (EDMA) channels transfer data between the FIFO and the SDRAM without CPU intervention, freeing the CPU for computation. JPEG2000 is implemented to obtain high fidelity in video recording and replaying, and ways of achieving high-performance code are briefly presented. The data-processing capability of the system is satisfactory and the smoothness of the replayed video is acceptable. By virtue of its design flexibility and reliable operation, the system based on DSP and FPGA is well suited to video recording and replaying applications.
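
    The actual system is DSP/FPGA hardware with EDMA transfers and JPEG2000 compression; purely as a software analogue of the same capture-buffer-encode-store pipeline, a minimal Python sketch (the file name, camera source and the use of OpenCV's optional JPEG 2000 codec are assumptions) could be:

        import queue, threading
        import cv2

        fifo = queue.Queue(maxsize=64)   # software stand-in for the FIFO/SDRAM buffering between capture and encode

        def capture(src=0):
            cap = cv2.VideoCapture(src)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                fifo.put(frame)
            fifo.put(None)               # end-of-stream marker

        def encode_and_record(path="recording.bin"):
            with open(path, "wb") as out:
                while True:
                    frame = fifo.get()
                    if frame is None:
                        break
                    # '.jp2' needs an OpenCV build with JPEG 2000 support; '.jpg' is a fallback
                    ok, buf = cv2.imencode(".jp2", frame)
                    if ok:
                        out.write(len(buf).to_bytes(4, "big"))   # length-prefix each frame for later replay
                        out.write(buf.tobytes())

        threading.Thread(target=capture, daemon=True).start()
        encode_and_record()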

  16. Video System for Viewing From a Remote or Windowless Cockpit

    Science.gov (United States)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
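
    The abstract mentions cylindrical warping for merging adjacent camera images into a mosaic; a minimal sketch of such a warp in Python with NumPy and OpenCV (the focal length and input file are illustrative, not values from the system) is:

        import numpy as np
        import cv2

        def cylindrical_warp(img, f):
            """Project an image onto a cylinder of focal length f (pixels) so adjacent views can be blended."""
            h, w = img.shape[:2]
            cx, cy = w / 2.0, h / 2.0
            # For each destination pixel, find the corresponding source pixel on the original planar image.
            ys, xs = np.indices((h, w), dtype=np.float32)
            theta = (xs - cx) / f                      # horizontal angle on the cylinder
            x_src = f * np.tan(theta) + cx
            y_src = (ys - cy) / np.cos(theta) + cy
            return cv2.remap(img, x_src.astype(np.float32), y_src.astype(np.float32), cv2.INTER_LINEAR)

        # Example: warp one camera frame before mosaicking (focal length here is a guess, not from the paper).
        frame = cv2.imread("camera_3.png")
        warped = cylindrical_warp(frame, f=700.0)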

  17. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T FPGA Board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576 resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
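
    The pan-tilt behaviour described above amounts to steering the camera so that the tracked object stays centred; a minimal Python sketch of turning the tracked centroid into pan/tilt commands (the gain, dead-band and angular mapping are assumptions, not values from the paper) is:

        FRAME_W, FRAME_H = 720, 576        # PAL resolution used in the paper
        DEAD_BAND = 40                     # pixels of tolerance around the image centre (assumed)
        GAIN = 0.05                        # degrees of camera motion per pixel of error (assumed)

        def pan_tilt_command(cx, cy):
            """Map the tracked object's centroid (cx, cy) to pan/tilt steps that re-centre it."""
            err_x = cx - FRAME_W / 2
            err_y = cy - FRAME_H / 2
            pan = GAIN * err_x if abs(err_x) > DEAD_BAND else 0.0
            tilt = GAIN * err_y if abs(err_y) > DEAD_BAND else 0.0
            return pan, tilt

        # e.g. an object tracked at (600, 200) asks the camera to pan right and tilt up
        print(pan_tilt_command(600, 200))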

  18. Dedicated data recording video system for Spacelab experiments

    Science.gov (United States)

    Fukuda, Toshiyuki; Tanaka, Shoji; Fujiwara, Shinji; Onozuka, Kuniharu

    1984-04-01

    A feasibility study of a video tape recorder (VTR) modification to add data-recording capability was conducted. The system is an on-board system to support Spacelab experiments as a dedicated video system and dedicated data recording system that operates independently of the normal operation of the Orbiter, Spacelab and the other experiments. It continuously records the video image signals together with the acquired data, status and operator's voice on one cassette video tape. Recorded material includes the crew's actions, animal behavior, microscopic views, melting materials in a furnace, etc. It is therefore expected that experimenters can easily and conveniently analyze the synchronized video, voice and data signals in their post-flight analysis.

  19. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three dimensional (3D display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with the stereoscopic 3D video. The study suggests that the change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users’ decision of object selection in terms of chosen location in 3D, while user attitudes do not have significant impact. Furthermore, the ray-casting-based interaction modality using Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.
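
    The ray-casting selection metaphor mentioned above can be illustrated with a minimal Python/NumPy sketch that picks the nearest object whose bounding sphere is hit by the pointing ray (the object list and radii are placeholders):

        import numpy as np

        def ray_sphere_hit(origin, direction, centre, radius):
            """Return the distance along the ray to the sphere, or None if the ray misses."""
            d = direction / np.linalg.norm(direction)
            oc = origin - centre
            b = np.dot(oc, d)
            c = np.dot(oc, oc) - radius ** 2
            disc = b * b - c
            if disc < 0:
                return None
            t = -b - np.sqrt(disc)
            return t if t > 0 else None

        def pick(origin, direction, objects):
            """Select the nearest object hit by the pointing ray (Wiimote-style ray casting)."""
            hits = [(ray_sphere_hit(origin, direction, c, r), name) for name, c, r in objects]
            hits = [(t, name) for t, name in hits if t is not None]
            return min(hits)[1] if hits else None

        objects = [("cube",   np.array([0.0, 0.0, -3.0]), 0.5),
                   ("sphere", np.array([1.0, 0.5, -5.0]), 0.7)]
        print(pick(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]), objects))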

  20. An integrated circuit/packet switched video conferencing system

    Energy Technology Data Exchange (ETDEWEB)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A. [Fermi National Accelerator Lab., Batavia, IL (United States). HEP Network Resource Center; Waits, T.A. [Rutgers Univ., Piscataway, NJ (United States). Dept. of Physics and Astronomy

    1996-07-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC with the help of members of the CDF collaboration set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  1. An integrated circuit/packet switched video conferencing system

    International Nuclear Information System (INIS)

    Kippenhan Junior, H.A.; Lidinsky, W.P.; Roediger, G.A.; Waits, T.A.

    1996-01-01

    The HEP Network Resource Center (HEPNRC) at Fermilab and the Collider Detector Facility (CDF) collaboration have evolved a flexible, cost-effective, widely accessible video conferencing system for use by high energy physics collaborations and others wishing to use video conferencing. No current systems seemed to fully meet the needs of high energy physics collaborations. However, two classes of video conferencing technology, circuit-switched and packet-switched, if integrated, might encompass most of HEP's needs. It was also realized that, even with this integration, some additional functions were needed and some of the existing functions were not always wanted. HEPNRC with the help of members of the CDF collaboration set out to develop such an integrated system using as many existing subsystems and components as possible. This system is called VUPAC (Video conferencing Using Packets and Circuits). This paper begins with brief descriptions of the circuit-switched and packet-switched video conferencing systems. Following this, issues and limitations of these systems are considered. Next the VUPAC system is described. Integration is accomplished primarily by a circuit/packet video conferencing interface. Augmentation is centered in another subsystem called MSB (Multiport MultiSession Bridge). Finally, there is a discussion of the future work needed in the evolution of this system. (author)

  2. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    Science.gov (United States)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in realtime, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy (0.7+/-0.3) pixels and mean target registration error of (2.3+/-1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
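
    The reported calibration figures are re-projection and registration errors; as a generic illustration (not the authors' implementation), a mean re-projection error can be computed from a solved camera pose in Python/OpenCV with placeholder point data:

        import numpy as np
        import cv2

        def mean_reprojection_error(object_pts, image_pts, rvec, tvec, K, dist):
            """Mean pixel distance between detected calibration points and their re-projections."""
            projected, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
            projected = projected.reshape(-1, 2)
            return float(np.mean(np.linalg.norm(projected - image_pts, axis=1)))

        # Placeholder data: a handful of 3D calibration points and their detected image positions.
        object_pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0],
                               [0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype=np.float32)
        image_pts = np.array([[320, 240], [400, 242], [480, 244],
                              [321, 320], [401, 322], [481, 324]], dtype=np.float32)
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
        dist = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
        print(mean_reprojection_error(object_pts, image_pts, rvec, tvec, K, dist))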

  3. Encrypted IP video communication system

    Science.gov (United States)

    Bogdan, Apetrechioaie; Luminiţa, Mateescu

    2010-11-01

    Digital video transmission is a permanent subject of development, research and improvement. This field of research has an exponentially growing market in civil, surveillance, security and military applications. A lot of solutions (FPGA, ASIC, DSP) have been used for this purpose. The paper presents the implementation of an encrypted, IP-based video communication system with a competitive performance/cost ratio.
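
    The paper does not disclose its cipher or framing, so the following is only a generic Python sketch of the idea: each JPEG-compressed frame is encrypted with a symmetric key before being sent over an IP socket (the key handling and addresses are placeholders):

        import socket
        import cv2
        from cryptography.fernet import Fernet   # symmetric authenticated encryption

        key = Fernet.generate_key()               # in a real system the key would be provisioned, not generated here
        cipher = Fernet(key)

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect(("192.0.2.10", 5000))        # placeholder receiver address

        cap = cv2.VideoCapture(0)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame)
            token = cipher.encrypt(jpeg.tobytes())            # encrypt the compressed frame
            sock.sendall(len(token).to_bytes(4, "big"))       # length prefix so the receiver can re-frame
            sock.sendall(token)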

  4. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that uses face recognition as its indexing feature. As the use of video cameras has increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record objects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency. The proposed system has been installed and applied in various environments and has already demonstrated its usefulness by helping to solve real cases.
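
    The paper's occlusion model uses fuzzy PCA; as a simplified stand-in, the sketch below shows ordinary PCA (eigenface) reconstruction of an occluded face in Python/NumPy, filling hidden pixels by least-squares projection of the visible pixels onto the learned basis (the training data, image size and occlusion mask are placeholders):

        import numpy as np

        # Placeholder training set: rows are vectorized, non-occluded face images.
        faces = np.random.rand(200, 64 * 64)

        mean_face = faces.mean(axis=0)
        # Eigenfaces from the top principal components of the centred training data.
        U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
        eigenfaces = Vt[:50]                       # keep the first 50 components (illustrative choice)

        def reconstruct(occluded, mask):
            """Fill occluded pixels by least-squares projection onto the eigenface basis.
            mask is 1 where the pixel is visible and 0 where it is occluded."""
            A = eigenfaces.T * mask[:, None]       # basis restricted to visible pixels
            b = (occluded - mean_face) * mask
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return mean_face + eigenfaces.T @ coeffs

        occluded_face = np.random.rand(64 * 64)    # placeholder probe image with, e.g., sunglasses
        mask = np.ones(64 * 64); mask[:1000] = 0   # placeholder occlusion map
        restored = reconstruct(occluded_face, mask)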

  5. COMPARISON OF 2D AND 3D VIDEO DISPLAYS FOR TEACHING VITREORETINAL SURGERY.

    Science.gov (United States)

    Chhaya, Nisarg; Helmy, Omar; Piri, Niloofar; Palacio, Agustina; Schaal, Shlomit

    2017-07-11

    To compare medical students' learning uptake and understanding of vitreoretinal surgeries by watching either 2D or 3D video recordings. Three vitreoretinal procedures (tractional retinal detachment, exposed scleral buckle removal, and four-point scleral fixation of an intraocular lens [TSS]) were recorded simultaneously with a conventional recorder for two-dimensional viewing and a VERION 3D HD system using Sony HVO-1000MD for three-dimensional viewing. Two videos of each surgery, one 2D and the other 3D, were edited to have the same content side by side. One hundred UMass medical students randomly assigned to a 2D group or 3D, then watched corresponding videos on a MacBook. All groups wore BiAL Red-blue 3D glasses and were appropriately randomized. Students filled out questionnaires about surgical steps or anatomical relationships of the pathologies or tissues, and their answers were compared. There was no significant difference in comprehension between the two groups for the extraocular scleral buckle procedure. However, for the intraocular TSS and tractional retinal detachment videos, the 3D group performed better than 2D (P < 0.05) on anatomy comprehension questions. Three-dimensional videos may have value in teaching intraocular ophthalmic surgeries. Surgical procedure steps and basic ocular anatomy may have to be reviewed to ensure maximal teaching efficacy.

  6. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources of geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones and their video cameras. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
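
    The key automation step is extracting highly overlapping frames from the video; a minimal Python/OpenCV sketch that keeps a new keyframe once feature overlap with the previous keyframe drops below a target (the overlap threshold and file name are assumptions, not values from the paper) is:

        import cv2

        orb = cv2.ORB_create(1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        MAX_OVERLAP = 0.8                          # assumed upper bound on feature overlap between keyframes

        def overlap(desc_a, desc_b):
            if desc_a is None or desc_b is None:
                return 0.0
            matches = matcher.match(desc_a, desc_b)
            return len(matches) / max(len(desc_a), 1)

        cap = cv2.VideoCapture("survey.mp4")       # placeholder smartphone video
        keyframes, last_desc, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            _, desc = orb.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
            # keep the first frame, then keep a frame once overlap with the last keyframe has fallen enough
            if last_desc is None or overlap(desc, last_desc) < MAX_OVERLAP:
                keyframes.append(idx)
                last_desc = desc
            idx += 1
        print(f"{len(keyframes)} keyframes selected for the mapping solution")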

  7. Data display with the Q system

    International Nuclear Information System (INIS)

    Oothoudt, M.A.

    1979-01-01

    The Q data-acquisition system for PDP-11 mini-computers at the Clinton P. Anderson Meson Physics Facility (LAMPF) provides experimenters with basic tools for on-line data display. Tasks are available to plot one- and two-parameter histograms on Tektronix 4000 series storage-tube terminals. The histograms to be displayed and the display format may be selected with simple keyboard commands. A task is also available to create and display live two-parameter scatter plots for any acquired or calculated quantities. Other tasks in the system manage the display data base, list display parameters and histogram contents on hardcopy devices, and save core histograms on disk or tape for off-line analysis. 8 figures

  8. Video game training and the reward system

    OpenAIRE

    Lorenz, R.; Gleich, T.; Gallinat, J.; Kühn, S.

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors ...

  9. Quality of Experience for Large Ultra-High-Resolution Tiled Displays with Synchronization Mismatch

    Directory of Open Access Journals (Sweden)

    Deshpande Sachin

    2011-01-01

    Full Text Available This paper relates to quality of experience when viewing images, video, or other content on large ultra-high-resolution displays made from individual display tiles. We define experiments to measure vernier acuity caused by synchronization mismatch for moving images. The experiments are used to obtain synchronization mismatch acuity threshold as a function of object velocity and as a function of occlusion or gap width. Our main motivation for measuring the synchronization mismatch vernier acuity is its relevance in the application of tiled display systems, which create a single contiguous image using individual discrete panels arranged in a matrix with each panel utilizing a distributed synchronization algorithm to display parts of the overall image. We also propose a subjective assessment method for perception evaluation of synchronization mismatch for large ultra-high-resolution tiled displays. For this, we design a synchronization mismatch measurement test video set for various tile configurations for various interpanel synchronization mismatch values. The proposed method for synchronization mismatch perception can evaluate tiled displays with or without tile bezels. The results from this work can help during design of low-cost tiled display systems, which utilize distributed synchronization mechanisms for a contiguous or bezeled image display.

  10. Natural display mode for digital DICOM-conformant diagnostic imaging.

    Science.gov (United States)

    Peters, Klaus-Ruediger; Ramsby, Gale R

    2002-09-01

    The authors performed this study to investigate the verification of the contrast display properties defined by the digital imaging and communication in medicine (DICOM) PS (picture archiving and communication system [PACS] standard) 3.14-2001 gray-scale display function standard and their dependency on display luminance range and video signal bandwidth. Contrast sensitivity and contrast linearity of DICOM-conformant displays were measured in just-noticeable differences (JNDs) on special perceptual contrast test patterns. Measurements were obtained six times at various display settings under dark room conditions. Display luminance range and video bandwidth had a significant effect on contrast perception. The perceptual promises of the standard could be established only with displays that were calibrated to a unity contrast resolution, at which the number of displayed intensity steps was equal to the number of perceivable contrast steps (JNDs). Such display conditions provide for visual perception information at the level of single-step contrast sensitivity and full-range contrast linearity. These "natural display" conditions also help minimize the Mach banding effects that otherwise reduce contrast sensitivity and contrast linearity. Most, if not all, conventionally used digital display modalities are driven with a contrast resolution larger than 1. Such conditions reduce contrast perception when compared with natural imaging conditions. The DICOM-conformant display conditions at unity contrast resolution were characterized as the "natural display" mode, and, thus, the authors a priori recommend them as being useful for making a primary diagnosis with PACS and teleradiology and as a standard for psychophysical research and performance measurements.

  11. Operational characteristics of pediatric radiology: Image display stations

    International Nuclear Information System (INIS)

    Taira, R.K.

    1987-01-01

    The display of diagnostic images is accomplished in the UCLA Pediatric Radiology Clinical Radiology Imaging System (CRIS) using three different types of digital viewing stations. These include a low-resolution station with six 512 x 512 monitors, a high-resolution station with three 1024 x 1024 monitors, and a very-high-resolution workstation with two 2048 x 2048 monitors. The display stations provide very basic image processing manipulations including zoom and scroll, contrast enhancement, and contrast reversal. The display stations are driven by a computer system which is dedicated for clinical use. During times when the clinical computer is unavailable (maintenance or system malfunction), the 512 x 512 workstation can be switched to operate from a research PACS system in the UCLA Image Processing Laboratory via a broadband communication network. Our initial clinical implementation involves digital viewing for pediatric radiology conferences. Presentation of inpatient cases uses the six-monitor 512 x 512 multiple viewing station. Later stages of the clinical implementation involve the use of higher-resolution displays for the purpose of primary diagnosis from video displays.

  12. Advanced Transport Operating System (ATOPS) color displays software description microprocessor system

    Science.gov (United States)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  13. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP-based video transmission system has been built upon a peer-to-peer (p2p) network structure utilizing Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any method, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model was developed to speed up the transmission of video stream data, which is encoded into fragments using the JPEG codec. To make the system capable of conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
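
    The original work is implemented in Java; purely to illustrate the source-peer idea, the following minimal Python sketch serves JPEG-encoded fragments over HTTP as a multipart stream (the port, path and boundary string are illustrative):

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import cv2

        cap = cv2.VideoCapture(0)   # local camera acting as the video source peer

        class StreamHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if self.path != "/video":
                    self.send_error(404)
                    return
                # multipart/x-mixed-replace lets a client render each JPEG fragment as it arrives
                self.send_response(200)
                self.send_header("Content-Type", "multipart/x-mixed-replace; boundary=frame")
                self.end_headers()
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpeg = cv2.imencode(".jpg", frame)
                    self.wfile.write(b"--frame\r\nContent-Type: image/jpeg\r\n\r\n")
                    self.wfile.write(jpeg.tobytes())
                    self.wfile.write(b"\r\n")

        HTTPServer(("0.0.0.0", 8080), StreamHandler).serve_forever()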

  14. Performance of a video-image-subtraction-based patient positioning system

    International Nuclear Information System (INIS)

    Milliken, Barrett D.; Rubin, Steven J.; Hamilton, Russell J.; Johnson, L. Scott; Chen, George T.Y.

    1997-01-01

    Purpose: We have developed and tested an interactive video system that utilizes image subtraction techniques to enable high precision patient repositioning using surface features. We report quantitative measurements of system performance characteristics. Methods and Materials: Video images can provide a high precision, low cost measure of patient position. Image subtraction techniques enable one to incorporate detailed information contained in the image of a carefully verified reference position into real-time images. We have developed a system using video cameras providing orthogonal images of the treatment setup. The images are acquired, processed and viewed using an inexpensive frame grabber and a PC. The subtraction images provide the interactive guidance needed to quickly and accurately place a patient in the same position for each treatment session. We describe the design and implementation of our system, and its quantitative performance, using images both to measure changes in position, and to achieve accurate setup reproducibility. Results: Under clinical conditions (60 cm field of view, 3.6 m object distance), the position of static, high contrast objects could be measured with a resolution of 0.04 mm (rms) in each of two dimensions. The two-dimensional position could be reproduced using the real-time image display with a resolution of 0.15 mm (rms). Two-dimensional measurement resolution of the head of a patient undergoing treatment for head and neck cancer was 0.1 mm (rms), using a lateral view, measuring the variation in position of the nose and the ear over the course of a single radiation treatment. Three-dimensional repositioning accuracy of the head of a healthy volunteer using orthogonal camera views was less than 0.7 mm (systematic error) with an rms variation of 1.2 mm. Setup adjustments based on the video images were typically performed within a few minutes. The higher precision achieved using the system to measure objects than to reposition
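
    A minimal Python/OpenCV sketch of the subtraction display idea follows: subtracting the verified reference image from live video yields a mid-grey image when the patient is aligned, with deviations from mid-grey marking the misalignment (the camera index and file name are placeholders):

        import cv2

        reference = cv2.imread("reference_setup.png", cv2.IMREAD_GRAYSCALE)  # verified reference position
        cap = cv2.VideoCapture(0)                                            # one of the two setup cameras

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            live = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            live = cv2.resize(live, (reference.shape[1], reference.shape[0]))
            # Mid-grey (128) where live and reference agree; lighter/darker regions reveal misalignment.
            diff = cv2.addWeighted(reference, 0.5, live, -0.5, 128)
            cv2.imshow("subtraction", diff)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break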

  15. Three-dimensional hologram display system

    Science.gov (United States)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  16. Varifocal mirror display of organ surfaces from CT scans

    International Nuclear Information System (INIS)

    Pizer, S.M.; Fuchs, H.; Bloomberg, S.H.; Li Ching Tsai; Heinz, E.R.

    1982-01-01

    A means will be presented of constructing a powerful varifocal mirror 3D display system with limited cost based on an ordinary color video digital display system. The importance of dynamic interactive control of the display of these images will be discussed; in particular, the design and usefulness of a method allowing real-time user-controlled motion of the 3D object being displayed will be discussed. Also, an effective method will be described of presenting images made of surfaces by the straightforward, automatic calculation of 3D edge strength, the ordering of the resulting voxels by edge strength, and the 3D grey-scale display of the top voxels on this ordered list. The application of these ideas to the 3D display of the intimal wall of the region of bifurcation of the carotid artery from 12-24 CT scans of the neck will be discussed

  17. System control and display

    International Nuclear Information System (INIS)

    Jacobs, J.

    1977-01-01

    The system described was designed, developed, and installed on short time scales and primarily utilized of-the-shelf military and commercial hardware. The system was designed to provide security-in-depth and multiple security options with several stages of redundancy. Under normal operating conditions, the system is computer controlled with manual backup during abnormal conditions. Sensor alarm data are processed in conjunction with weather data to reduce nuisance alarms. A structured approach is used to order alarmed sectors for assessment. Alarm and video information is presented to security personnel in an interactive mode. Historical operational data are recorded for system evaluation

  18. Digital image display system for emergency room

    International Nuclear Information System (INIS)

    Murry, R.C.; Lane, T.J.; Miax, L.S.

    1989-01-01

    This paper reports on a digital image display system for the emergency room (ER) in a major trauma hospital. Its objective is to reduce radiographic image delivery time to a busy ER while simultaneously providing a multimodality capability. Image storage, retrieval, and display will also be facilitated with this system. The system's backbone is a token-ring network of RISC and personal computers. The display terminals are higher-function RISC computers with 1,024 x 1,024 color or gray-scale monitors. The PCs serve as administrative terminals. Nuclear medicine, CT, MR, and digitized film images are transferred to the image display system.

  19. A Smart Spoofing Face Detector by Display Features Analysis

    Directory of Open Access Journals (Sweden)

    ChinLun Lai

    2016-07-01

    Full Text Available In this paper, a smart face liveness detector is proposed to prevent the biometric system from being “deceived” by the video or picture of a valid user that the counterfeiter took with a high definition handheld device (e.g., iPad with retina display. By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face displayed in the high definition display by verifying the chromaticity regions in the captured face. That is, a live or spoof face can be distinguished precisely by the designed optical image sensor. To sum up, by the proposed method/system, a normal optical image sensor can be upgraded to a powerful version to detect the spoofing actions. The experimental results prove that the proposed detection system can achieve very high detection rate compared to the existing methods and thus be practical to implement directly in the authentication systems.

  20. A Smart Spoofing Face Detector by Display Features Analysis.

    Science.gov (United States)

    Lai, ChinLun; Tai, ChiuYuan

    2016-07-21

    In this paper, a smart face liveness detector is proposed to prevent the biometric system from being "deceived" by the video or picture of a valid user that the counterfeiter took with a high definition handheld device (e.g., iPad with retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing action comes from a fake face displayed in the high definition display by verifying the chromaticity regions in the captured face. That is, a live or spoof face can be distinguished precisely by the designed optical image sensor. To sum up, by the proposed method/system, a normal optical image sensor can be upgraded to a powerful version to detect the spoofing actions. The experimental results prove that the proposed detection system can achieve very high detection rate compared to the existing methods and thus be practical to implement directly in the authentication systems.

  1. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats
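
    The redistribution of noise power described above follows from sampling theory; stated generically (not in the paper's exact notation), the noise power spectrum after sampling is the presampled spectrum replicated at multiples of the sampling frequency:

        NPS_{\mathrm{sampled}}(v) \;=\; \sum_{k=-\infty}^{\infty} NPS_{\mathrm{pre}}\left(v - k\, v_s\right)

    For interline or interlaced readout, where each field contains only alternate lines, the effective vertical sampling frequency v_s is halved, so the replicas sit closer together and more of the presampled noise folds back (aliases) into the displayed frequency band.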

  2. Motion sickness and postural sway in console video games.

    Science.gov (United States)

    Stoffregen, Thomas A; Faugloire, Elise; Yoshida, Ken; Flanagan, Moira B; Merhi, Omar

    2008-04-01

    We tested the hypotheses that (a) participants might develop motion sickness while playing "off-the-shelf" console video games and (b) postural motion would differ between sick and well participants, prior to the onset of motion sickness. There have been many anecdotal reports of motion sickness among people who play console video games (e.g., Xbox, PlayStation). Participants (40 undergraduate students) played a game continuously for up to 50 min while standing or sitting. We varied the distance to the display screen (and, consequently, the visual angle of the display). Across conditions, the incidence of motion sickness ranged from 42% to 56%; incidence did not differ across conditions. During game play, head and torso motion differed between sick and well participants prior to the onset of subjective symptoms of motion sickness. The results indicate that console video games carry a significant risk of motion sickness. Potential applications of this research include changes in the design of console video games and recommendations for how such systems should be used.

  3. Software for graphic display systems

    International Nuclear Information System (INIS)

    Karlov, A.A.

    1978-01-01

    In this paper some aspects of graphic display systems are discussed. The design of a display subroutine library is described, with an example, and graphic dialogue software is considered primarily from the point of view of the programmer who uses a high-level language. (Auth.)

  4. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  5. Video game training and the reward system.

    Science.gov (United States)

    Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  6. Industrial Personal Computer based Display for Nuclear Safety System

    International Nuclear Information System (INIS)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min

    2014-01-01

    The safety display of a nuclear system has been classified as important to safety (SIL: Safety Integrity Level 3). These days the regulatory agencies are imposing stricter safety requirements for digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a back-plane bus. The operating system is customized for the nuclear safety display application. The display unit design adopts two improvement features: one is to provide two separate processors for the main computer and the display device using serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and several open areas remain to be worked out for a solid system. The main purpose of this paper is to describe and suggest a methodology to develop a safety-critical display system, and the descriptions are focused on the safety requirement point of view.

  7. Industrial Personal Computer based Display for Nuclear Safety System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min [KEPCO, Youngin (Korea, Republic of)

    2014-08-15

    The safety display of a nuclear system has been classified as important to safety (SIL: Safety Integrity Level 3). These days the regulatory agencies are imposing stricter safety requirements for digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial personal computer based safety display system with a safety grade operating system and safety grade display methods. The description consists of three parts: the background, the safety requirements and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a back-plane bus. The operating system is customized for the nuclear safety display application. The display unit design adopts two improvement features: one is to provide two separate processors for the main computer and the display device using serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this case the main computer uses minimized graphic functions for the safety display. The display design is at the conceptual phase, and several open areas remain to be worked out for a solid system. The main purpose of this paper is to describe and suggest a methodology to develop a safety-critical display system, and the descriptions are focused on the safety requirement point of view.

  8. Initial clinical experience with an interactive, video-based patient-positioning system for head and neck treatment

    International Nuclear Information System (INIS)

    Johnson, L.; Hadley, Scott W.; Milliken, Barrett D.; Pelizzari, Charles A.; Haraf, Daniel J.; Nguyen, Ai; Chen, George T.Y.

    1996-01-01

    Objective: To evaluate an interactive, video-based system for positioning head and neck patients. Materials and Methods: System hardware includes two B and W CCD cameras (mounted to provide left-lateral and AP-inferior views), zoom lenses, and a PC equipped with a frame grabber. Custom software is used to acquire and archive video images, as well as to display real-time subtraction images revealing patient misalignment in multiple views. Live subtraction images are obtained by subtracting a reference image (i.e., an image of the patient in the correct position) from real-time video. As seen in the figure, darker regions of the subtraction image indicate where the patient is currently, while lighter regions indicate where the patient should be. Adjustments in the patient's position are updated and displayed in less than 0.07s, allowing the therapist to interactively detect and correct setup discrepancies. Patients selected for study are treated BID and immobilized with conventional litecast straps attached to a baseframe which is registered to the treatment couch. Morning setups are performed by aligning litecast marks and patient anatomy to treatment room lasers. Afternoon setups begin with the same procedure, and then live subtraction images are used to fine-tune the setup. At morning and afternoon setups, video images and verification films are taken after positioning is complete. These are visually registered offline to determine the distribution of setup errors per patient, with and without video assistance. Results: Without video assistance, the standard deviation of setup errors typically ranged from 5 to 7mm and was patient-dependent. With video assistance, standard deviations are reduced to 1 to 4mm, with the result depending on patient cooperativeness and the length of time spent fine-tuning the setups. At current levels of experience, 3 to 4mm accuracy is easily achieved in about 30s, while 1 to 3mm accuracy is achieved in about 1 to 2 minutes. Studies

  9. Generating high gray-level resolution monochrome displays with conventional computer graphics cards and color monitors.

    Science.gov (United States)

    Li, Xiangrui; Lu, Zhong-Lin; Xu, Pengjing; Jin, Jianzhong; Zhou, Yifeng

    2003-11-30

    Display systems based on conventional computer graphics cards are capable of generating images with about 8-bit luminance resolution. However, most vision experiments require more than 12 bits of luminance resolution. Pelli and Zhang [Spatial Vis. 10 (1997) 443] described a video attenuator for generating high luminance resolution displays on a monochrome monitor, or for driving just the green gun of a color monitor. Here we show how to achieve a white display by adding video amplifiers to duplicate the monochrome signal to drive all three guns of any color monitor. Because of the lack of the availability of high quality monochrome monitors, our method provides an inexpensive way to achieve high-resolution monochromatic displays using conventional, easy-to-get equipment. We describe the design principles, test results, and a few additional functionalities.

  10. Multichannel waveform display system

    International Nuclear Information System (INIS)

    Kolvankar, V.G.

    1989-01-01

    For any multichannel data acquisition system, a multichannel paper chart recorder undoubtedly forms an essential part of the system. When deployed on-line, it instantaneously provides, for visual inspection, hard copies of the signal waveforms on a common time base at any desired sensitivity and time resolution. Within the country, only a small range of these strip chart recorders is available, and under stringent specifications imported recorders are often procured. The cost of such recorders may range from 1 to 5 lakhs of rupees in foreign exchange. A system has been developed to provide, on the oscilloscope, a steady display of multichannel waveforms refreshed from digital data stored in memory. The merits and demerits of the display system are compared with those of a system built around a conventional paper chart recorder. Various illustrations of multichannel seismic event data acquired at the Gauribidanur seismic array station are also presented. (author). 2 figs

  11. Advanced Transport Operating System (ATOPS) color displays software description: MicroVAX system

    Science.gov (United States)

    Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.

    1992-01-01

    This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.

  12. Panoramic, large-screen, 3-D flight display system design

    Science.gov (United States)

    Franklin, Henry; Larson, Brent; Johnson, Michael; Droessler, Justin; Reinhart, William F.

    1995-01-01

    The report documents and summarizes the results of the required evaluations specified in the SOW and the design specifications for the selected display system hardware. Also included are the proposed development plan and schedule as well as the estimated rough order of magnitude (ROM) cost to design, fabricate, and demonstrate a flyable prototype research flight display system. The thrust of the effort was development of a complete understanding of the user/system requirements for a panoramic, collimated, 3-D flyable avionic display system and the translation of the requirements into an acceptable system design for fabrication and demonstration of a prototype display in the early 1997 time frame. Eleven display system design concepts were presented to NASA LaRC during the program, one of which was down-selected to a preferred display system concept. A set of preliminary display requirements was formulated. The state of the art in image source technology, 3-D methods, collimation methods, and interaction methods for a panoramic, 3-D flight display system were reviewed in depth and evaluated. Display technology improvements and risk reductions associated with maturity of the technologies for the preferred display system design concept were identified.

  13. System Would Generate Virtual Heads-Up Display

    Science.gov (United States)

    Lambert, James L.

    1994-01-01

    Proposed helmet-mounted electronic display system superimposes full-color alphanumerical and/or graphical information onto observer's visual field. Displayed information projected directly onto observer's retinas, giving observer illusion of full-size computer display in foreground or background. Display stereoscopic, holographic, or in form of virtual image. Used by pilots to view navigational information while looking outside or at instruments, by security officers to view information about critical facilities while looking at visitors, or possibly even stock-exchange facilities to view desktop monitors and overhead displays simultaneously. System includes acousto-optical tunable filter (AOTF), which acts as both spectral filter and spatial light modulator.

  14. New Management Tools – From Video Management Systems to Business Decision Systems

    Directory of Open Access Journals (Sweden)

    Emilian Cristian IRIMESCU

    2015-06-01

    Full Text Available In the last decades, management has been characterized by the increased use of Business Decision Systems, also called Decision Support Systems. Moreover, systems that were until now used in a traditional way for some simple activities (like security) have migrated to the decision-making area of management. Some examples are the Video Management Systems used in physical security. This article will underline how Video Management Systems have evolved into Business Decision Systems, what the advantages of their use are, and what the trends in this industry are. The article will also analyze whether, at this moment, Video Management Systems are real Business Decision Systems or whether some functions are missing to rank them at this level.

  15. Video game addiction in emerging adulthood: Cross-sectional evidence of pathology in video game addicts as compared to matched healthy controls.

    Science.gov (United States)

    Stockdale, Laura; Coyne, Sarah M

    2018-01-01

    The Internet Gaming Disorder Scale (IGDS) is a widely used measure of video game addiction, a pathology affecting a small percentage of all people who play video games. Emerging adult males are significantly more likely to be video game addicts. Few researchers have examined how people who qualify as video game addicts based on the IGDS compare to matched controls based on age, gender, race, and marital status. The current study compared IGDS video game addicts to matched non-addicts in terms of their mental, physical, and social-emotional health, using self-report survey methods. Addicts had poorer mental health and cognitive functioning, including poorer impulse control and ADHD symptoms, compared to controls. Additionally, addicts displayed increased emotional difficulties, including increased depression and anxiety, felt more socially isolated, and were more likely to display symptoms of pathological internet pornography use. Female video game addicts were at unique risk for negative outcomes. The sample for this study was undergraduate college students, and self-report measures were used. Participants who met the IGDS criteria for video game addiction displayed poorer emotional, physical, mental, and social health, adding to the growing evidence that video game addiction is a valid phenomenon. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Unattended digital video surveillance: A system prototype for EURATOM safeguards

    International Nuclear Information System (INIS)

    Chare, P.; Goerten, J.; Wagner, H.; Rodriguez, C.; Brown, J.E.

    1994-01-01

    Ever-increasing capabilities in video and computer technology have changed the face of video surveillance. From yesterday's film and analog video tape-based systems, we now emerge into the digital era with surveillance systems capable of digital image processing, image analysis, decision control logic, and random data access features -- all of which provide greater versatility with the potential for increased effectiveness in video surveillance. Digital systems also offer other advantages such as the ability to ''compress'' data, providing increased storage capacities and the potential for longer surveillance periods. Remote surveillance and system-to-system communications are also benefits that can be derived from digital surveillance systems. All of these features are extremely important in today's climate of increasing safeguards activity and decreasing budgets. Los Alamos National Laboratory's Safeguards Systems Group and the EURATOM Safeguards Directorate have teamed to design and implement a prototype surveillance system that will take advantage of the versatility of digital video for facility surveillance and data review. In this paper we will familiarize you with the system components and features and report on progress in developmental areas such as image compression and region-of-interest processing.

  17. Video Game Training and the Reward System

    Directory of Open Access Journals (Sweden)

    Robert C. Lorenz

    2015-02-01

    Full Text Available Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in the ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.

  18. Video game training and the reward system

    Science.gov (United States)

    Lorenz, Robert C.; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone

    2015-01-01

    Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training. PMID:25698962

  19. Energy Systems Integration Facility Videos | Energy Systems Integration

    Science.gov (United States)

    Energy Systems Integration Facility (ESIF) videos from NREL: NREL + SolarCity: Maximizing Solar Power on Electrical Grids; Redefining What's Possible for Renewable Energy: Grid Integration; Robot-Powered Reliability Testing at NREL's ESIF Microgrid

  20. Display-management system for MFTF

    International Nuclear Information System (INIS)

    Nelson, D.O.

    1981-01-01

    The Mirror Fusion Test Facility (MFTF) is controlled by 65 local control microcomputers which are supervised by a local network of nine 32-bit minicomputers. Associated with seven of the nine computers are state-of-the-art graphics devices, each with extensive local processing capability. These devices provide the means for an operator to interact with the control software running on the minicomputers. It is critical that the information the operator views accurately reflects the current state of the experiment. This information is integrated into dynamically changing pictures called displays. The primary organizational component of the display system is the software-addressable segment. The segments created by the display creation software are managed by display managers associated with each graphics device. Each display manager uses sophisticated storage management mechanisms to keep the proper segments resident in the local graphics device storage
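
    As a rough illustration of the residency management described above, the following Python sketch keeps display segments resident in a fixed amount of graphics-device storage using a least-recently-used policy. The class and callback names (SegmentCache, load_segment, evict_segment) and the LRU policy itself are assumptions for illustration; the actual MFTF mechanism is not detailed in this record.

        from collections import OrderedDict

        class SegmentCache:
            """Keeps display segments resident in limited device storage (LRU policy)."""

            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.segments = OrderedDict()   # segment_id -> size in bytes

            def request(self, segment_id, size, load_segment, evict_segment):
                """Ensure a segment is resident, evicting least recently used ones."""
                if segment_id in self.segments:
                    self.segments.move_to_end(segment_id)   # mark as recently used
                    return
                while self.used + size > self.capacity and self.segments:
                    old_id, old_size = self.segments.popitem(last=False)
                    evict_segment(old_id)                   # free device storage
                    self.used -= old_size
                load_segment(segment_id)                    # download to the device
                self.segments[segment_id] = size
                self.used += size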

  1. Problem with multi-video format M-learning applications

    CSIR Research Space (South Africa)

    Adeyeye, MO

    2014-01-01

    Full Text Available in conjunction with the technical aspects of video display in browsers, when varying media formats are used. The <video> tag used in this work renders videos from two sources with different MIME types. Feeds from the video sources, namely YouTube and UCT...

  2. Nuclear image display controller

    International Nuclear Information System (INIS)

    Roth, D.A.

    1985-01-01

    In a nuclear imaging system the digitized x and y coordinates of gamma ray photon emission events address memory locations corresponding to the coordinates. The respective locations are incremented each time they are addressed, so at the end of a selected time or event count period the locations contain digital values or raw data corresponding to the intensity of pixels comprising an image frame. The raw data for a frame is coupled to one input of an arithmetic logic unit (ALU) whose output is coupled to a display controller memory. The output of the controller memory is coupled to another ALU input with a feedback bus and is also coupled to a further signal processing circuit which includes means for converting processed data to analog video signals for television display. The ALU is selectively controlled to let raw image data pass through to the display controller memory or alternately to add (or subtract) the raw data for the last image frame developed to the raw data for preceding frames held in the display controller, thereby producing the visual effect on the television screen of an isotope flowing through anatomy
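
    The event accumulation and ALU add/subtract path described above can be illustrated with a short sketch. The NumPy arrays below stand in for the histogram memory and the display controller memory, and the 256 x 256 frame size is an assumed value; this is an illustration of the data flow only, not the actual hardware logic.

        import numpy as np

        FRAME_SHAPE = (256, 256)          # pixel matrix addressed by digitized x, y

        def acquire_frame(events):
            """Increment the memory location addressed by each (x, y) emission event."""
            frame = np.zeros(FRAME_SHAPE, dtype=np.int64)
            for x, y in events:
                frame[y, x] += 1
            return frame

        def update_display_memory(display_memory, raw_frame, mode="pass"):
            """Emulate the ALU: pass raw data through, or add/subtract it to prior frames."""
            if mode == "pass":
                return raw_frame.copy()
            if mode == "add":
                return display_memory + raw_frame
            if mode == "subtract":
                return display_memory - raw_frame
            raise ValueError(mode)

        # Example: accumulate two frames to mimic imaging of isotope flow.
        display = update_display_memory(None, acquire_frame([(10, 12), (10, 12)]), "pass")
        display = update_display_memory(display, acquire_frame([(11, 12)]), "add")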

  3. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube 1 according to the selected photos. To comprehensively describe the scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip. 1 https://www.youtube.com/.

  4. Applications of Scalable Multipoint Video and Audio Using the Public Internet

    Directory of Open Access Journals (Sweden)

    Robert D. Gaglianello

    2000-01-01

    Full Text Available This paper describes a scalable multipoint video system, designed for efficient generation and display of high quality, multiple resolution, multiple compressed video streams over IP-based networks. We present our experiences using the system over the public Internet for several “real-world” applications, including distance learning, virtual theater, and virtual collaboration. The trials were a combined effort of Bell Laboratories and the Gertrude Stein Repertory Theatre (TGSRT). We also present current advances in the conferencing system since the trials, new areas for application, and future applications.

  5. A kind of video image digitizing circuit based on computer parallel port

    International Nuclear Information System (INIS)

    Wang Yi; Tang Le; Cheng Jianping; Li Yuanjing; Zhang Binquan

    2003-01-01

    A video image digitizing circuit based on the computer parallel port was developed to digitize the flash x-ray images in our Multi-Channel Digital Flash X-ray Imaging System. The circuit can digitize video images and store them in static memory. The digital images can be transferred to a computer through the parallel port, where they can be displayed, processed, and stored. (authors)

  6. Effect Through Broadcasting System Access Point For Video Transmission

    Directory of Open Access Journals (Sweden)

    Leni Marlina

    2015-08-01

    Full Text Available Most universities have already implemented wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore important to study the performance of broadcasting instructional video through an access point in a university environment. In every university computer network, the access points are connected by cable; these wired segments connect and transmit data from one computer to another, while wireless clients are connected through radio waves. This research tests and assesses how a WLAN access point performs when broadcasting instructional video from a server to clients. The study aims to show how to build a wireless network using an access point, and how to build a computer server with supporting software so that instructional video can be transmitted from the server to the clients via the access point.

  7. Helping Video Games Rewire "Our Minds"

    Science.gov (United States)

    Pope, Alan T.; Palsson, Olafur S.

    2001-01-01

    Biofeedback-modulated video games are games that respond to physiological signals as well as mouse, joystick or game controller input; they embody the concept of improving physiological functioning by rewarding specific healthy body signals with success at playing a video game. The NASA patented biofeedback-modulated game method blends biofeedback into popular off-the-shelf video games in such a way that the games do not lose their entertainment value. This method uses physiological signals (e.g., electroencephalogram frequency band ratio) not simply to drive a biofeedback display directly, or periodically modify a task as in other systems, but to continuously modulate parameters (e.g., game character speed and mobility) of a game task in real time while the game task is being performed by other means (e.g., a game controller). Biofeedback-modulated video games represent a new generation of computer and video game environments that train valuable mental skills beyond eye-hand coordination. These psychophysiological training technologies are poised to exploit the revolution in interactive multimedia home entertainment for the personal improvement, not just the diversion, of the user.
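
    The core idea -- continuously modulating a game parameter from a physiological signal while the game is still played with a normal controller -- can be sketched as follows. The beta/theta EEG band ratio, target value, and gain are hypothetical numbers chosen only to illustrate the method, not parameters from the patented system.

        def modulate_speed(base_speed, beta_power, theta_power,
                           target_ratio=2.0, gain=0.5,
                           min_scale=0.5, max_scale=1.5):
            """Scale game-character speed by how far the beta/theta EEG ratio is
            from a target value; the player still steers with the game controller."""
            ratio = beta_power / max(theta_power, 1e-6)
            scale = 1.0 + gain * (ratio - target_ratio) / target_ratio
            scale = max(min_scale, min(max_scale, scale))
            return base_speed * scale

        # Each game tick: read the latest band powers and adjust the character speed.
        speed = modulate_speed(base_speed=10.0, beta_power=18.0, theta_power=6.0)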

  8. FPGA Implementation of Video Transmission System Based on LTE

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2015-01-01

    Full Text Available In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution (LTE) is designed. The system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. According to tests, this system consumes fewer hardware resources and is able to transmit bidirectional video clearly and stably.

  9. A content-based news video retrieval system: NVRS

    Science.gov (United States)

    Liu, Huayong; He, Tingting

    2009-10-01

    This paper focuses on TV news programs and designs a content-based news video browsing and retrieval system, NVRS, which makes it convenient for users to quickly browse and retrieve news video by categories such as politics, finance, amusement, etc. Combining audiovisual features and caption text information, the system automatically segments a complete news program into separate news stories. NVRS supports keyword-based news story retrieval and category-based news story browsing, and generates a key-frame-based video abstract for each story. Experiments show that the story segmentation method is effective and the retrieval is also efficient.

  10. Solar active region display system

    Science.gov (United States)

    Golightly, M.; Raben, V.; Weyland, M.

    2003-04-01

    The Solar Active Region Display System (SARDS) is a client-server application that automatically collects a wide range of solar data and displays it in a format easy for users to assimilate and interpret. Users can rapidly identify active regions of interest or concern from color-coded indicators that visually summarize each region's size, magnetic configuration, recent growth history, and recent flare and CME production. The active region information can be overlaid onto solar maps, multiple solar images, and solar difference images in orthographic, Mercator or cylindrical equidistant projections. Near real-time graphs display the GOES soft and hard x-ray flux, flare events, and daily F10.7 value as a function of time; color-coded indicators show current trends in soft x-ray flux, flare temperature, daily F10.7 flux, and x-ray flare occurrence. Through a separate window up to 4 real-time or static graphs can simultaneously display values of KP, AP, daily F10.7 flux, GOES soft and hard x-ray flux, GOES >10 and >100 MeV proton flux, and Thule neutron monitor count rate. Climatologic displays use color-valued cells to show F10.7 and AP values as a function of Carrington/Bartel's rotation sequences - this format allows users to detect recurrent patterns in solar and geomagnetic activity as well as variations in activity levels over multiple solar cycles. Users can customize many of the display and graph features; all displays can be printed or copied to the system's clipboard for "pasting" into other applications. The system obtains and stores space weather data and images from sources such as the NOAA Space Environment Center, NOAA National Geophysical Data Center, the joint ESA/NASA SOHO spacecraft, and the Kitt Peak National Solar Observatory, and can be extended to include other data series and image sources. Data and images retrieved from the system's database are converted to XML and transported from a central server using HTTP and SOAP protocols, allowing

  11. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  12. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  13. Rapid, low-cost, image analysis through video processing

    International Nuclear Information System (INIS)

    Levinson, R.A.; Marrs, R.W.; Grantham, D.G.

    1976-01-01

    Remote sensing now provides the data necessary to solve many resource problems. However, many of the complex image processing and analysis functions used in analysis of remotely-sensed data are accomplished using sophisticated image analysis equipment. The high cost of this equipment places many of these techniques beyond the means of most users. A new, more economical, video system capable of performing complex image analysis has now been developed. This report describes the functions, components, and operation of that system. Processing capability of the new video image analysis system includes many of the tasks previously accomplished with optical projectors and digital computers. Video capabilities include: color separation, color addition/subtraction, contrast stretch, dark level adjustment, density analysis, edge enhancement, scale matching, image mixing (addition and subtraction), image ratioing, and construction of false-color composite images. Rapid input of non-digital image data, instantaneous processing and display, relatively low initial cost, and low operating cost give the video system a competitive advantage over digital equipment. Complex pre-processing, pattern recognition, and statistical analyses must still be handled through digital computer systems. The video system at the University of Wyoming has undergone extensive testing, comparison to other systems, and has been used successfully in practical applications ranging from analysis of x-rays and thin sections to production of color composite ratios of multispectral imagery. Potential applications are discussed including uranium exploration, petroleum exploration, tectonic studies, geologic mapping, hydrology, sedimentology and petrography, anthropology, and studies on vegetation and wildlife habitat
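
    Several of the operations listed above (contrast stretch, image addition/subtraction, band ratioing) reduce to simple pixel arithmetic. A minimal NumPy sketch for 8-bit grayscale bands is given below as an illustration of those operations in software; it is not a model of the original analog video hardware, and the percentile limits are assumed values.

        import numpy as np

        def contrast_stretch(band, low_pct=2, high_pct=98):
            """Linearly stretch the band so the given percentiles map to 0..255."""
            lo, hi = np.percentile(band, [low_pct, high_pct])
            stretched = (band.astype(np.float64) - lo) / max(hi - lo, 1e-6)
            return np.clip(stretched * 255, 0, 255).astype(np.uint8)

        def band_ratio(band_a, band_b):
            """Pixel-by-pixel ratio of two bands, rescaled for display."""
            ratio = band_a.astype(np.float64) / (band_b.astype(np.float64) + 1e-6)
            return contrast_stretch(ratio)

        def mix(band_a, band_b, subtract=False):
            """Image addition or subtraction with clipping back to 8 bits."""
            a, b = band_a.astype(np.int32), band_b.astype(np.int32)
            return np.clip(a - b if subtract else a + b, 0, 255).astype(np.uint8)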

  14. Video copy protection and detection framework (VPD) for e-learning systems

    Science.gov (United States)

    ZandI, Babak; Doustarmoghaddam, Danial; Pour, Mahsa R.

    2013-03-01

    This article reviews and compares the copyright issues related to digital video files, whose protection can be categorized as content-based and digital watermarking copy detection. We then describe how to protect a digital video by using a special video data hiding method and algorithm. We also discuss how to detect copying of the file. Based on the direction of video copy detection technology, and combining it with our own research results, we put forward a new video protection and copy detection approach for plagiarism and e-learning systems using video data hiding technology. Finally, we introduce a framework for video protection and detection in e-learning systems (the VPD framework).

  15. Desain dan Implementasi Aplikasi Video Surveillance System Berbasis Web-SIG

    Directory of Open Access Journals (Sweden)

    I M.O. Widyantara

    2015-06-01

    Full Text Available A video surveillance system (VSS) is a monitoring system based on IP cameras. A VSS is implemented for live streaming and serves to observe and monitor a site remotely. Typically, the IP cameras in a VSS come with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software becomes ineffective. When IP cameras are installed across a large area, it is difficult for an administrator to describe the location of each camera, and monitoring an area with many IP cameras also becomes more difficult. Considering these shortcomings of VSSs, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates the concept of a web-based geographical information system with the Google Maps API (Web-GIS). The VSS application is built with smart features including an IP-camera map, live streaming of events, information in info windows, and marker clustering. Test results showed that the application is able to display all of the built-in features correctly

  16. General multiplex centralized fire-alarm display system

    International Nuclear Information System (INIS)

    Zhu Liqun; Chen Jinming

    2002-01-01

    A fire-alarm display system is developed that can connect with each type of fire controller produced in the factory as well as with SIGMASYS controllers. It can display all alarm information. The display system software integrates communication, database, and multimedia functions, and supports fire inspection, alarm display, data storage, information searching, and so on. The drawing software lets the user conveniently add, delete, move, and modify fire detection or fire fighting facilities on the building floor maps. The graphic transform software lets the display use vector graphics produced by popular drawing software such as AutoCAD. The system software provides administration functions, such as a shift-change log book and worker management. The software runs on the Windows 98 platform. The user interface is friendly and the system is reliable in operation

  17. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
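
    The chroma-key step used to facilitate object extraction can be sketched as a simple color-distance mask, as in the Python fragment below. The green key color and distance threshold are hypothetical values, and the real system additionally relies on background mosaics and tracking for robustness.

        import numpy as np

        def chroma_key_mask(frame_rgb, key_color=(0, 255, 0), threshold=100.0):
            """Return a boolean foreground mask: True where the pixel differs
            sufficiently from the key (background) color."""
            diff = frame_rgb.astype(np.float64) - np.array(key_color, dtype=np.float64)
            distance = np.sqrt((diff ** 2).sum(axis=-1))
            return distance > threshold

        def composite(foreground_rgb, background_rgb, mask):
            """Insert the extracted object into a new (e.g., virtual 3D) background."""
            out = background_rgb.copy()
            out[mask] = foreground_rgb[mask]
            return out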

  18. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable, and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  19. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  20. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology can no longer meet the urgent desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different formats of video and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The Android player, which has all the basic functions of an ordinary player and is able to play normal 2D video, is the basic structure for redevelopment, and RTSP is implemented in this structure for communication. In order to achieve stereoscopic display, pixel rearrangement is needed in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats that we process are left-right, top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.
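
    The pixel rearrangement mentioned for the decoding stage can be illustrated for the left-right (side-by-side) input format: the decoded frame is split into left and right halves and re-interleaved by columns for an autostereoscopic panel. The sketch below uses NumPy rather than the native JNI code of the actual player, and column interleaving is only one possible target layout.

        import numpy as np

        def side_by_side_to_column_interleaved(frame):
            """frame: H x W x 3 array with the left view in the left half and the
            right view in the right half; returns a column-interleaved frame."""
            height, width, _ = frame.shape
            half = width // 2
            left, right = frame[:, :half], frame[:, half:2 * half]
            out = np.empty((height, 2 * half, 3), dtype=frame.dtype)
            out[:, 0::2] = left      # even display columns show the left view
            out[:, 1::2] = right     # odd display columns show the right view
            return out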

  1. A green-color portable waveguide eyewear display system

    Science.gov (United States)

    Xia, Lingbo; Xu, Ke; Wu, Zhengming; Hu, Yingtian; Li, Zhenzhen; Wang, Yongtian; Liu, Juan

    2013-08-01

    Waveguide display systems are widely used in various display fields, especially in head-mounted displays. Compared with traditional head-mounted display systems, this device dramatically reduces size and mass. However, several serious problems, such as high scattering, cumbersome design, and chromatic aberration, still need to be solved. We designed and fabricated a monochromatic portable eyewear display system consisting of a comfortable eyewear device and a waveguide system with two holographic gratings located symmetrically on the substrate. We record the gratings on a photopolymer medium with high efficiency and wavelength sensitivity. The light emitted from the micro-display is diffracted by the grating and trapped in the glass substrate by total internal reflection. The relationship between diffraction efficiency and exposure value is studied and analyzed, and we fabricated gratings with appropriate diffraction efficiency under optimized conditions. To avoid disturbance from stray light, we optimize the waveguide system numerically and perform optical experiments. With this system, people can both see through the waveguide to obtain information from the outside world and receive information from the micro-display. Taking ergonomics and industrial production into account, we designed the structure to be compact and portable. It has the advantages of a compact configuration and acceptable cost. We believe that this kind of planar waveguide system is a potential replacement choice for portable devices in future mobile communications.

  2. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    2000-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed, the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  3. System and method for improving video recorder performance in a search mode

    NARCIS (Netherlands)

    1991-01-01

    A method and apparatus wherein video images are recorded on a plurality of tracks of a tape such that, for playback in a search mode at a speed higher than the recording speed the displayed image will consist of a plurality of contiguous parts, some of the parts being read out from tracks each

  4. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

    Directory of Open Access Journals (Sweden)

    Li Houqiang

    2007-01-01

    Full Text Available With the growing popularity of personal digital assistant devices and smart phones, more and more consumers are becoming quite enthusiastic to appreciate videos via mobile devices. However, limited display size of the mobile devices has been imposing significant barriers for users to enjoy browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The whole framework includes two major parts: video content generation and video adaptation system. During video compression, the attention information in video sequences will be detected using an attention model and embedded into bitstreams with the proposed supplement-enhanced information (SEI) structure. Furthermore, we also develop an innovative scheme to adaptively adjust quantization parameters in order to simultaneously improve the quality of overall encoding and the quality of transcoding the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier will be applied to generate a new bitstream for attention areas in frames. The new low-resolution bitstream containing mostly attention information, instead of the high-resolution one, will be sent to users for display on the mobile devices. Experimental results show that the proposed spatial adaptation scheme is able to improve both subjective and objective video qualities.
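
    The adaptive quantization idea -- spending more bits inside the attention area and fewer outside it -- can be sketched per macroblock as below. The QP offsets and the binary attention map are illustrative assumptions; the paper's actual SEI syntax and rate control are not reproduced here.

        def adjust_qp(base_qp, attention_map, qp_drop=4, qp_raise=2,
                      qp_min=0, qp_max=51):
            """Return a per-macroblock QP grid: lower QP (better quality) inside
            attention areas, slightly higher QP elsewhere.
            attention_map is a 2D list of 0/1 flags, one per macroblock."""
            qp_grid = []
            for row in attention_map:
                qp_row = []
                for attended in row:
                    qp = base_qp - qp_drop if attended else base_qp + qp_raise
                    qp_row.append(max(qp_min, min(qp_max, qp)))
                qp_grid.append(qp_row)
            return qp_grid

        # Example: a 2 x 3 macroblock frame with an attended region in the centre.
        print(adjust_qp(30, [[0, 1, 0], [0, 1, 0]]))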

  5. Power and Surveillance in Video Games

    Directory of Open Access Journals (Sweden)

    Héctor Puente Bienvenido

    2014-08-01

    Full Text Available In this article we explore the history of video games (focusing on multiplayer ones) from the perspective of power relationships and the ways in which authority has been exercised by the game industry and game players over time. From a hierarchical system of power and domain to the increasing flatness of the current structure, we address the systems of control and surveillance. We finish by assessing the emergent forms of production and relationships between players and developers.

  6. 36 CFR 1194.24 - Video and multimedia products.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Video and multimedia products... Video and multimedia products. (a) All analog television displays 13 inches and larger, and computer... training and informational video and multimedia productions which support the agency's mission, regardless...

  7. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  8. Information Display: Considerations for Designing Modern Computer-Based Display Systems

    International Nuclear Information System (INIS)

    O'Hara, J.; Pirus, D.; Beltracchi, L.

    2003-01-01

    OAK- B135 To help nuclear utilities and suppliers design and implement plant information management systems and displays that provide accurate and timely information and require minimal navigation and interface management

  9. Video-speed electronic paper based on electrowetting

    Science.gov (United States)

    Hayes, Robert A.; Feenstra, B. J.

    2003-09-01

    In recent years, a number of different technologies have been proposed for use in reflective displays. One of the most appealing applications of a reflective display is electronic paper, which combines the desirable viewing characteristics of conventional printed paper with the ability to manipulate the displayed information electronically. Electronic paper based on the electrophoretic motion of particles inside small capsules has been demonstrated and commercialized; but the response speed of such a system is rather slow, limited by the velocity of the particles. Recently, we have demonstrated that electrowetting is an attractive technology for the rapid manipulation of liquids on a micrometre scale. Here we show that electrowetting can also be used to form the basis of a reflective display that is significantly faster than electrophoretic displays, so that video content can be displayed. Our display principle utilizes the voltage-controlled movement of a coloured oil film adjacent to a white substrate. The reflectivity and contrast of our system approach those of paper. In addition, we demonstrate a colour concept, which is intrinsically four times brighter than reflective liquid-crystal displays and twice as bright as other emerging technologies. The principle of microfluidic motion at low voltages is applicable in a wide range of electro-optic devices.

  10. Display of adenoregulin with a novel Pichia pastoris cell surface display system.

    Science.gov (United States)

    Ren, Ren; Jiang, Zhengbing; Liu, Meiyun; Tao, Xinyi; Ma, Yushu; Wei, Dongzhi

    2007-02-01

    Two Pichia pastoris cell surface display vectors were constructed. The vectors consisted of the flocculation functional domain of Flo1p with its own secretion signal sequence or the alpha-factor secretion signal sequence, a polyhistidine (6xHis) tag for detection, an enterokinase recognition site, and the insertion sites for target proteins. Adenoregulin (ADR) is a 33-amino-acid antimicrobial peptide isolated from Phyllomedusa bicolor skin. The ADR was expressed and displayed on the Pichia pastoris KM71 cell surface with the system reported. The displayed recombinant ADR fusion protein was detected by fluorescence microscopy and confocal laser scanning microscopy (CLSM). The antimicrobial activity of the recombinant adenoregulin was detected after proteolytic cleavage of the fusion protein on cell surface. The validity of the Pichia pastoris cell surface display vectors was proved by the displayed ADR.

  11. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    Full Text Available A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact seems to be kept between B and C). In the case of a triangle video conference, the respective video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to the same direction, eye contact is kept and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point at the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact can be kept even for the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be kept not only for two or three participants but for any number of participants, as long as they sit at the vertices of a regular polygon.

  12. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
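
    The per-camera step of separating moving targets from the background can be sketched with a standard background-subtraction model. The fragment below uses OpenCV's MOG2 subtractor (OpenCV 4.x API) as a stand-in, since the project's own algorithms are not detailed in this summary; the camera source and minimum target area are assumed values.

        import cv2

        def track_moving_targets(source=0, min_area=500):
            """Read live video, subtract the learned background, and report
            bounding boxes of moving targets in each frame."""
            capture = cv2.VideoCapture(source)
            subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
            while True:
                ok, frame = capture.read()
                if not ok:
                    break
                mask = subtractor.apply(frame)
                _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadows
                contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                               cv2.CHAIN_APPROX_SIMPLE)
                boxes = [cv2.boundingRect(c) for c in contours
                         if cv2.contourArea(c) >= min_area]
                yield frame, boxes
            capture.release()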

  13. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  14. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  15. An integrated acquisition, display, and analysis system

    International Nuclear Information System (INIS)

    Ahmad, T.; Huckins, R.J.

    1987-01-01

    The design goal of the ND9900/Genuie was to integrate a high performance data acquisition and display subsystem with a state-of-the-art 32-bit supermicrocomputer. This was achieved by integrating a Digital Equipment Corporation MicroVAX II CPU board with acquisition and display controllers via the Q-bus. The result is a tightly coupled processing and analysis system for Pulse Height Analysis and other applications. The system architecture supports distributed processing, so that acquisition and display functions are semi-autonomous, making the VAX concurrently available for applications programs

  16. Specialized video systems for use in underground storage tanks

    International Nuclear Information System (INIS)

    Heckendom, F.M.; Robinson, C.W.; Anderson, E.K.; Pardini, A.F.

    1994-01-01

    The Robotics Development Groups at the Savannah River Site and the Hanford site have developed remote video and photography systems for deployment in underground radioactive waste storage tanks at Department of Energy (DOE) sites as a part of the Office of Technology Development (OTD) program within DOE. Figure 1 shows the remote video/photography systems in a typical underground storage tank environment. Viewing and documenting the tank interiors and their associated annular spaces is an extremely valuable tool in characterizing their condition and contents and in controlling their remediation. Several specialized video/photography systems and robotic End Effectors have been fabricated that provide remote viewing and lighting. All are remotely deployable into and from the tank, and all viewing functions are remotely operated. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. Overview video systems, both monaural and stereo versions, include a camera, zoom lens, camera positioner, vertical deployment system, and positional feedback. Each independent video package can be inserted through a 100 mm (4 in.) diameter opening. A special attribute of these packages is their design to never get larger than the entry hole during operation and to be fully retrievable. The End Effector systems will be deployed on the large robotic Light Duty Utility Arm (LDUA) being developed by other portions of the OTD-DOE programs. The systems implement a multi-functional ''over the coax'' design that uses a single coaxial cable for all data and control signals over the more than 900 foot cable (or fiber optic) link

  17. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicing in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras need to be calibrated -- only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
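
    The key-point matching and homography estimation described above can be sketched with OpenCV. Because SURF sits in the non-free contrib module, the sketch substitutes ORB features; the feature count, match-keeping ratio, and RANSAC threshold are illustrative values rather than those used in the paper.

        import cv2
        import numpy as np

        def estimate_homography(image_a, image_b, max_features=2000, keep_ratio=0.75):
            """Match key points between two overlapping camera views and estimate
            the homography mapping image_b onto image_a (ORB used in place of SURF)."""
            orb = cv2.ORB_create(nfeatures=max_features)
            kp_a, desc_a = orb.detectAndCompute(image_a, None)
            kp_b, desc_b = orb.detectAndCompute(image_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)
            matches = matches[:int(len(matches) * keep_ratio)]
            src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return homography

        def warp_and_blend(image_a, image_b, homography):
            """Warp image_b into image_a's frame and overlay the overlapping pixels."""
            h, w = image_a.shape[:2]
            canvas = cv2.warpPerspective(image_b, homography, (w * 2, h))
            canvas[0:h, 0:w] = image_a      # naive overlay; real systems blend seams
            return canvas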

  18. Future of photorefractive based holographic 3D display

    Science.gov (United States)

    Blanche, P.-A.; Bablumian, A.; Voorakaranam, R.; Christenson, C.; Lemieux, D.; Thomas, J.; Norwood, R. A.; Yamamoto, M.; Peyghambarian, N.

    2010-02-01

    The very first demonstration of our refreshable holographic display based on a photorefractive polymer was published in Nature in early 2008 [1]. Based on the unique properties of a new organic photorefractive material and the holographic stereography technique, this display addressed a gap between large static holograms printed in permanent media (photopolymers) and small real-time holographic systems like the MIT holovideo. Applications range from medical imaging to refreshable maps and advertisement. Here we present several technical solutions for improving the performance parameters of the initial display from an optical point of view. Full color holograms can be generated thanks to angular multiplexing, the recording time can be reduced from minutes to seconds with a pulsed laser, and full parallax holograms can be recorded in a reasonable time thanks to parallel writing. We also discuss the future of such a display and the possibility of video rate.

  19. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  20. Patterned Video Sensors For Low Vision

    Science.gov (United States)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  1. User interface design in safety parameter display systems

    International Nuclear Information System (INIS)

    Schultz, E.E. Jr.; Johnson, G.L.

    1988-01-01

    The extensive installation of computerized Safety Parameter Display Systems (SPDSs) in nuclear power plants since the Three-Mile Island accident has enhanced plant safety. It has also raised new issues of how best to ensure an effective interface between human operators and the plant via computer systems. New developments in interface technologies since the current generation of SPDSs was installed can contribute to improving display interfaces. These technologies include new input devices, three-dimensional displays, delay indicators, and auditory displays. Examples of how they might be applied to improve current SPDSs are given. These examples illustrate how the new user interface technology could be applied to future nuclear plant displays

  2. Video Content Search System for Better Students Engagement in the Learning Process

    Directory of Open Access Journals (Sweden)

    Alanoud Alotaibi

    2014-12-01

    Full Text Available As a component of the e-learning educational process, content plays an essential role. Increasingly, the video-recorded lectures in e-learning systems are becoming more important to learners. In most cases, a single video-recorded lecture contains more than one topic or sub-topic. Therefore, to enable learners to find the desired topic and reduce learning time, e-learning systems need to provide a search capability for searching within the video content. This can be accomplished by enabling learners to identify the video, or the portion of it, that contains a keyword they are looking for. This research aims to develop a Video Content Search (VCS) system to facilitate searching in educational videos and their contents. A preliminary experiment was conducted on a selected university course. All students needed a system to avoid the time-wasting problem of watching long videos with no significant benefit. The statistics showed that the number of learners increased during the experiment. Future work will include studying the impact of the VCS system on students’ performance and satisfaction.

  3. Web Based Room Monitoring System Using Webcam

    Directory of Open Access Journals (Sweden)

    Tole Sutikno

    2008-04-01

    Full Text Available Security has become very important with the increasing number of crime cases. If a security system fails, there is a need for a mechanism capable of recording the criminal act so that it can be used by the authorities for investigation purposes. The objective of this research is to develop a security system using video streaming that is able to monitor in real time, display video in a browser, and record video when triggered by a sensor. This monitoring system comprises two security levels: a camera acting as a video recorder of special events, triggered by an infrared sensor connected to a microcontroller via serial communication, and a camera acting as a real-time room monitor. The hardware consists of an infrared sensor circuit that detects special events and communicates serially with an AT89S51 microcontroller, which controls the system to perform the recording process; the software consists of a server that displays streaming video in a web page and a video recorder. The software for video recording and the camera server uses Visual Basic 6.0, and the video streaming uses PHP 5.1.6. As a result, the system can be used to record the desired special events and can display streaming video in a web page using the LAN infrastructure.

  4. Hybrid compression of video with graphics in DTV communication systems

    OpenAIRE

    Schaar, van der, M.; With, de, P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an efficient transmission/storage of these mixed video and graphics signals and, at this emerging stage of DTV systems, introducing new standards is not desired. Nevertheless, in the professional video...

  5. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research topic. It relates human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system in this research uses the Widrow-Hoff learning method for training and testing images with the Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters: detection rate and false positive rate. The system accuracy depends on good technique and on the face positions used in training and testing.
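
    The Widrow-Hoff (least-mean-squares) rule behind ADALINE updates the weights in proportion to the prediction error on each training example. A minimal sketch is shown below, assuming flattened image feature vectors and binary expression labels; the learning rate and epoch count are illustrative.

        import numpy as np

        def train_adaline(features, targets, learning_rate=0.01, epochs=50):
            """Widrow-Hoff / LMS training: w <- w + eta * (t - y) * x, with y = w.x + b."""
            n_samples, n_features = features.shape
            weights = np.zeros(n_features)
            bias = 0.0
            for _ in range(epochs):
                for x, t in zip(features, targets):
                    y = np.dot(weights, x) + bias      # linear activation
                    error = t - y
                    weights += learning_rate * error * x
                    bias += learning_rate * error
            return weights, bias

        def predict(weights, bias, x, threshold=0.0):
            """Classify a test image's feature vector (e.g., expression vs. neutral)."""
            return 1 if np.dot(weights, x) + bias > threshold else 0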

  6. A Secure and Robust Object-Based Video Authentication System

    Directory of Open Access Journals (Sweden)

    He Dajun

    2004-01-01

    Full Text Available An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
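
    The embedding idea -- hiding each watermark bit in the energy relationship of selected DFT coefficients -- can be sketched for a single coefficient pair as below. This is a simplified illustration: the coefficient indices and margin are hypothetical, and the real system chooses coefficient groups randomly, adds ECC, and operates within the MPEG4 coding loop.

        import numpy as np

        def _set_magnitude(spectrum, idx, new_mag):
            """Rescale a DFT coefficient and its conjugate-symmetric partner so the
            inverse transform stays (almost exactly) real-valued."""
            rows, cols = spectrum.shape
            partner = ((-idx[0]) % rows, (-idx[1]) % cols)
            for i in {idx, partner}:                    # set() avoids double-scaling
                mag = abs(spectrum[i])
                if mag > 1e-12:
                    spectrum[i] *= new_mag / mag

        def embed_bit(block, idx_a, idx_b, bit, margin=1.10):
            """Embed one bit in a pixel block by forcing an ordering between the
            energies of two mid-frequency DFT coefficients."""
            spectrum = np.fft.fft2(block.astype(np.float64))
            mean = (abs(spectrum[idx_a]) + abs(spectrum[idx_b])) / 2.0
            hi, lo = mean * margin, mean / margin
            _set_magnitude(spectrum, idx_a, hi if bit else lo)
            _set_magnitude(spectrum, idx_b, lo if bit else hi)
            return np.real(np.fft.ifft2(spectrum))

        def extract_bit(block, idx_a, idx_b):
            """Recover the bit from the same coefficient pair's energy relationship."""
            spectrum = np.fft.fft2(block.astype(np.float64))
            return 1 if abs(spectrum[idx_a]) >= abs(spectrum[idx_b]) else 0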

  7. A Retrieval Optimized Surveillance Video Storage System for Campus Application Scenarios

    Directory of Open Access Journals (Sweden)

    Shengcheng Ma

    2018-01-01

    Full Text Available This paper investigates and analyzes the characteristics of video data and puts forward a campus surveillance video storage system, with the university campus as the specific application environment. To address the challenge that content-based video retrieval response times are too long, a key-frame index subsystem is designed. The key frames of a video reflect its main content. Extracted from the video, key frames are associated with the metadata information to establish the storage index. The key-frame index is used in lookup operations while querying. This method can greatly reduce the amount of video data read and effectively improves the query’s efficiency. Building on the above, we model the storage system with a stochastic Petri net (SPN) and verify the improvement in query performance by quantitative analysis.
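
    A common way to obtain the key frames that feed such an index is to keep frames whose color histograms differ markedly from the last kept frame. The sketch below assumes an OpenCV-readable video file and an illustrative threshold; the paper's exact extraction criterion and its stochastic Petri net model are not reproduced.

        import cv2

        def extract_key_frames(path, threshold=0.4, bins=32):
            """Return (frame_index, frame) pairs whose histogram correlation with
            the last key frame falls below 1 - threshold."""
            capture = cv2.VideoCapture(path)
            key_frames, last_hist, index = [], None, 0
            while True:
                ok, frame = capture.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
                cv2.normalize(hist, hist)
                if last_hist is None or \
                        cv2.compareHist(last_hist, hist, cv2.HISTCMP_CORREL) < 1 - threshold:
                    key_frames.append((index, frame))
                    last_hist = hist
                index += 1
            capture.release()
            return key_frames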

  8. Bandwidth Optimization On Design Of Visual Display Information System Based Networking At Politeknik Negeri Bali

    Science.gov (United States)

    Sudiartha, IKG; Catur Bawa, IGNB

    2018-01-01

    Information cannot be separated from the social life of the community, especially in the world of education. Examples of such information are academic calendars, activity agendas, announcements, and campus activity news. In line with technological developments, purely text-based information is becoming obsolete, so creativity is needed to present information more quickly, accurately, and attractively by exploiting developments in digital technology and the Internet. This paper develops an application for providing information in the form of a visual display, applied to a computer network system with multimedia applications. Network-based applications provide ease of updating data through Internet services and attractive presentations with multimedia support. The application “Networking Visual Display Information Unit” can be used as a medium that provides information services for students and academic employees that are more interesting and easier to update than a bulletin board. The information, presented in the form of running text, latest information, agenda, academic calendar, and video, provides an attractive presentation in line with technological developments at the Politeknik Negeri Bali. Through this research, the “Networking Visual Display Information Unit” software is expected to be created with optimal bandwidth usage by combining local data sources and data obtained through the network. This research produces a visual display design with optimal bandwidth usage and an application in the form of supporting software.

  9. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    Science.gov (United States)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  10. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all of the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  11. Projection display industry market and technology trends

    Science.gov (United States)

    Castellano, Joseph A.; Mentley, David E.

    1995-04-01

    The projection display industry is diverse, embracing a variety of technologies and applications. In recent years, there has been a high level of interest in projection displays, particularly those using LCD panels or light valves because of the difficulty in making large screen, direct view displays. Many developers feel that projection displays will be the wave of the future for large screen HDTV (high-definition television), penetrating the huge existing market for direct view CRT-based televisions. Projection displays can have the images projected onto a screen either from the rear or the front; the main characteristic is their ability to be viewed by more than one person. In addition to large screen home television receivers, there are numerous other uses for projection displays including conference room presentations, video conferences, closed circuit programming, computer-aided design, and military command/control. For any given application, the user can usually choose from several alternative technologies. These include CRT front or rear projectors, LCD front or rear projectors, LCD overhead projector plate monitors, various liquid or solid-state light valve projectors, or laser-addressed systems. The overall worldwide market for projection information displays of all types and for all applications, including home television, will top $4.6 billion in 1995 and $6.45 billion in 2001.

  12. Adaptive Streaming over HTTP (DASH) for Video Streaming Applications

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) on an internet network, adapting to the Hypertext Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the source video to lower bit rates with an H.264 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are described by a Media Presentation Description (MPD) manifest, the format known as MPEG-DASH. The MPEG-DASH streams run on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to scalability of the streaming video service on the client side. The main target of the mechanism is a smooth MPEG-DASH video display on the client. The simulation results show that the scalable video streaming scheme based on MPEG-DASH is able to improve the quality of the displayed image on the client side, where video buffering can be kept constant and smooth for the duration of video viewing.
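
    A minimal sketch of the preparation pipeline described above, assuming ffmpeg for the H.264 encoding step and GPAC's MP4Box for segmentation and MPD generation; the exact command-line flags vary between tool versions, and the bit-rate ladder is only an example.

        import subprocess

        bitrates = ["400k", "800k", "1500k"]        # bit-rate variants for client-side adaptation
        renditions = []
        for br in bitrates:
            out = f"video_{br}.mp4"
            subprocess.run(["ffmpeg", "-y", "-i", "source.mp4",
                            "-c:v", "libx264", "-b:v", br, out], check=True)
            renditions.append(out)

        # Cut every rendition into fixed-duration segments and write the
        # MPD manifest that the DASH player requests first.
        subprocess.run(["MP4Box", "-dash", "4000", "-rap",
                        "-out", "manifest.mpd", *renditions], check=True)

    The player then switches between the listed renditions segment by segment, which is the client-side scalability behaviour the abstract refers to.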

  13. Real-time geo-referenced video mosaicking with the MATISSE system

    DEFF Research Database (Denmark)

    Vincent, Anne-Gaelle; Pessel, Nathalie; Borgetto, Manon

    This paper presents the MATISSE system: Mosaicking Advanced Technologies Integrated in a Single Software Environment. This system aims at producing in-line and off-line geo-referenced video mosaics of the seabed, given a video input and navigation data. It is based upon several techniques of image...

  14. Video enhancement of dental radiographic films

    International Nuclear Information System (INIS)

    Van Dis, M.L.; Beck, F.M.; Miles, D.A.

    1989-01-01

    A prototype video image display system, a real-time analog enhancer (RAE), was compared to conventional viewing conditions with the use of nonscreen dental films. When medium optical density films were evaluated, there was no significant difference in the number of radiographic details detected. Conventional viewing conditions allowed perception of more details when dark films were evaluated; however, the RAE unit allowed the perception of more details when light films were viewed

  15. In-Network Adaptation of Video Streams Using Network Processors

    Directory of Open Access Journals (Sweden)

    Mohammad Shorfuzzaman

    2009-01-01

    problem can be addressed, near the network edge, by applying dynamic, in-network adaptation (e.g., transcoding) of video streams to meet available connection bandwidth, machine characteristics, and client preferences. In this paper, we extrapolate from earlier work of Shorfuzzaman et al. 2006, in which we implemented and assessed an MPEG-1 transcoding system on the Intel IXP1200 network processor, to consider the feasibility of in-network transcoding for other video formats and network processor architectures. The use of "on-the-fly" video adaptation near the edge of the network offers the promise of simpler support for a wide range of end devices with different display and other characteristics that can be used in different types of environments.

  16. Data displays for multi-detector monitoring systems

    International Nuclear Information System (INIS)

    Barnes, R.C.M.

    1978-03-01

    Extensive installations of sensors are used for environmental surveillance of radiological hazards, fire, etc. The data from such arrays of detectors can be collected by data processing systems which generate appropriate supervisory displays and records. This paper reviews facilities and physical configurations of computer-based display systems, with particular reference to radiological protection schemes. The general principles are relevant to other fields of application. (author)

  17. Measurement of the exposure rate due to low energy x-rays emitted from video display terminals

    International Nuclear Information System (INIS)

    Campos, L.L.

    1988-01-01

    Thermoluminescent dosimeters of CaSO4:Dy have been used to measure the low energy x-rays emitted from Video Display Terminals (VDTs). For each terminal, three points were measured with five dosimeters at each point. The points were at distances of 5 and 50 cm in front of the screen and at 65 cm at an angle of approximately 50°. The last two positions approximate the positions of the lenses of the eye and the gonads, respectively. A survey of 50 VDTs at a distance of 5 cm from the screen resulted in exposure rates nearly fifteen times below the exposure rate of 0.5 mR h⁻¹ (0.129 μC kg⁻¹ h⁻¹), which is the limit recommended by the International Atomic Energy Agency (IAEA), Safety Series No. 9 (1967), Basic Safety Standards for Radiation Protection. (author)

  18. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights...

  19. Function-oriented display system: background and first prototypes

    International Nuclear Information System (INIS)

    Andresen, Gisle; Friberg, Maarten; Teigen, Arild; Pirus, Dominique

    2004-04-01

    The objective of the function-oriented displays and alarm project is to design, implement and evaluate Human System Interfaces (HSI) based on a function-oriented design philosophy. Function-oriented design is an approach for designing HSIs where the plant's functions, identified through a function analysis, are used for determining the content, organisation, and management of displays. The project has used the 'FITNESS approach', originally developed by EDF in France, as a starting point. FITNESS provides an integrated display system consisting of process operating displays, operating procedures, alarms and trend displays - all based on a functional decomposition of the plant. So far, two prototypes have been implemented on the FRESH PWR simulator in HAMMLAB. The first prototype focused on the condensate pumps. Three process operating displays representing functions at different levels of the functional hierarchy were implemented. Computerised startup and shutdown procedures for the condensate pumps function were also implemented. In the second prototype, the scope was increased to cover the main feedwater system. The displays of the first prototype were redesigned and additional displays were created. In conclusion, the first phase of the project has been completed successfully, and we are now ready to enter the second phase. In the second phase, the scope of the prototype will be increased further to include the steam-generators and function-oriented disturbance operating procedures. The prototype will be evaluated in a user test conducted later in 2004. (Author)

  20. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    International Nuclear Information System (INIS)

    Pan, Guobing; Chen, Jiaoliao; Xin, Wenhui; Yan, Guozheng

    2011-01-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, the existing WCE systems are not widely applied in clinic because of the low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE has the abilities of imaging the GI tract and transmitting the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and some experiments were performed to test the capability of energy transferring. The results showed that the wireless electric power supply system had the ability to transfer more than 136 mW power, which was enough for the working of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig with the resolution of 320 × 240, and transmitted NTSC format video outside the body. Because of the wireless power supply, the video WCE system with high frame rate and high resolution becomes feasible, and provides a novel solution for the diagnosis of the GI tract in clinic

  1. Guide to Synchronization of Video Systems to IRIG Timing

    Science.gov (United States)

    1992-07-01

    Guide to Synchronization of Video Systems to IRIG Timing, Optical Systems Group, Range Commanders Council, White Sands Missile Range, NM 88002-5110, RCC Document 456-92. This document addresses a broad field of video synchronization to IRIG timing, with emphasis on color synchronization. Before delving into the details of synchronization, it reviews the reasons for synchronizing video systems.

  2. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
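
    The following sketch shows the general shape of a hybrid deinterlacer that blends a spatial estimate (vertical interpolation) with a temporal estimate (the previous frame) for the missing lines of one field. The fixed weight w stands in for the per-region weight that the paper derives from spectral residue; the actual decision logic of the proposed method is not reproduced here.

        import numpy as np

        def deinterlace_field(field_lines, prev_frame, top_field=True, w=0.5):
            h, width = prev_frame.shape
            out = prev_frame.astype(float).copy()
            present = list(range(0, h, 2)) if top_field else list(range(1, h, 2))
            out[present] = field_lines                      # lines we actually received
            missing = range(1, h - 1, 2) if top_field else range(2, h - 1, 2)
            for y in missing:
                spatial = 0.5 * (out[y - 1] + out[y + 1])   # average of neighbouring lines
                temporal = prev_frame[y].astype(float)      # reuse the previous frame
                out[y] = w * spatial + (1.0 - w) * temporal  # blend the two estimates
            return out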

  3. Security alarm communication and display systems development

    International Nuclear Information System (INIS)

    Waddoups, I.G.

    1990-01-01

    Sandia National Laboratories (SNL) has, as lead Department of Energy (DOE) physical security laboratory, developed a variety of alarm communication and display systems for DOE and Department of Defense (DOD) facilities. This paper briefly describes some of the systems developed and concludes with a discussion of technology relevant to those currently designing, developing, implementing, or procuring such a system. Development activities and the rapid evolution of computers over the last decade have resulted in a broad variety of capabilities to support most security system communication and display needs. The major task in selecting a system is becoming familiar with these capabilities and finding the best match to a specific need

  4. Informatics in Radiology (infoRAD): mobile wireless DICOM server system and PDA with high-resolution display: feasibility of group work for radiologists.

    Science.gov (United States)

    Nakata, Norio; Kandatsu, Susumu; Suzuki, Naoki; Fukuda, Kunihiko

    2005-01-01

    A novel mobile system has been developed for use by radiologists in managing Digital Imaging and Communications in Medicine (DICOM) image data. The system consists of a mobile DICOM server (MDS) and personal digital assistants (PDAs), including a Linux PDA with a video graphics array (VGA) display (307,200 pixels, 3.7 inches). The MDS weighs 410 g, has a 60-GB hard disk drive and a built-in wireless local area network (LAN) access point, and supports a DICOM server (Central Test Node). The Linux-based MDS can be accessed with personal computers (PCs) and PDAs by means of a wireless or wired LAN, and client-server communications can be established at any time. DICOM images can be displayed by using any PDA or PC by means of a Web browser. Simultaneous access to the MDS is possible for multiple authenticated users. With most PDAs, image compression is necessary for complete display of DICOM images; however, the VGA screen can display a 512 x 512-pixel DICOM image almost in its entirety. This wireless system allows efficient management of heavy loads of lossless DICOM image data and will be useful for collaborative work by radiologists in education, conferences, and research.

  5. An Advanced Diagnostic Display for Core Protection Calculator System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-Hyeon; Jeong, See-Chae; Sohn, Se-Do [Korea Power Engineering Company, Daejeon (Korea, Republic of)

    2008-10-15

    The main purpose of a Nuclear Power Plant Instrumentation and Control (I and C) Display System is to provide the operator's interface to the I and C systems. The CPCS display (Shin-Kori 1 and 2) provides operators with 1) plant monitoring values of field input and algorithm variables that reflect the reactor core conditions, 2) operation values that operators can change, and 3) CPCS status. It would be optimal if operators could understand the plant (including the CPCS itself) condition intuitively from the displayed values, but this is not easy in the CPCS. For example, if the CPCS Channel Trouble light is lit, operators need some amount of time to investigate what caused the trouble light, because there are more than a hundred causes that can generate the channel trouble. If a display provides diagnostic information that shows what caused the displayed alarms, it will greatly help operators understand the CPCS status. To provide this diagnostic information, this paper suggests an active self-explanatory display mechanism. This self-explanatory diagnostic display mechanism utilizes an ontology in XML that describes parent-child and sibling relationships of display variables, through which in-depth and in-breadth diagnostic tracking is possible. This paper consists of two parts. First, the key features of the CPCS Flat Panel Display System (FPDS) are described. Second, the features of the active self-explanatory diagnostic display are discussed.

  6. An Advanced Diagnostic Display for Core Protection Calculator System

    International Nuclear Information System (INIS)

    Kim, Ji-Hyeon; Jeong, See-Chae; Sohn, Se-Do

    2008-01-01

    The main purpose of a Nuclear Power Plant Instrumentation and Control (I and C) Display System is to provide the operator's interface to the I and C systems. The CPCS display (Shin-Kori 1 and 2) provides operators with 1) plant monitoring values of field input and algorithm variables that reflect the reactor core conditions, 2) operation values that operators can change, and 3) CPCS status. It would be optimal if operators could understand the plant (including the CPCS itself) condition intuitively from the displayed values, but this is not easy in the CPCS. For example, if the CPCS Channel Trouble light is lit, operators need some amount of time to investigate what caused the trouble light, because there are more than a hundred causes that can generate the channel trouble. If a display provides diagnostic information that shows what caused the displayed alarms, it will greatly help operators understand the CPCS status. To provide this diagnostic information, this paper suggests an active self-explanatory display mechanism. This self-explanatory diagnostic display mechanism utilizes an ontology in XML that describes parent-child and sibling relationships of display variables, through which in-depth and in-breadth diagnostic tracking is possible. This paper consists of two parts. First, the key features of the CPCS Flat Panel Display System (FPDS) are described. Second, the features of the active self-explanatory diagnostic display are discussed.

  7. A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.

    Science.gov (United States)

    Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian

    2016-04-01

    Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results: The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable

  8. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
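
    One possible realization of the tracking phase, using OpenCV: corner-like feature points (which could equally be seeded by the user) are propagated frame to frame with pyramidal Lucas-Kanade optical flow. The eigenvalue-based refinement and the contour-formation phase of the paper are not shown, and all parameter values are placeholders.

        import cv2

        cap = cv2.VideoCapture("input.avi")
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                      qualityLevel=0.01, minDistance=7)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            pts = nxt[status.flatten() == 1].reshape(-1, 1, 2)   # keep successfully tracked points
            prev_gray = gray
        cap.release()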

  9. The ASDEX upgrade digital video processing system for real-time machine protection

    Energy Technology Data Exchange (ETDEWEB)

    Drube, Reinhard, E-mail: reinhard.drube@ipp.mpg.de [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Neu, Gregor [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany); Cole, Richard H.; Lüddecke, Klaus [Unlimited Computer Systems GmbH, Seeshaupterstr. 15, 82393 Iffeldorf (Germany); Lunt, Tilmann; Herrmann, Albrecht [Max-Planck-Institut für Plasmaphysik, EURATOM Association, Boltzmannstr. 2, 85748 Garching (Germany)

    2013-11-15

    Highlights: • We present the Real-Time Video diagnostic system of ASDEX Upgrade. • We show the implemented image processing algorithms for machine protection. • The way to achieve a robust operating multi-threading Real-Time system is described. -- Abstract: This paper describes the design, implementation, and operation of the Video Real-Time (VRT) diagnostic system of the ASDEX Upgrade plasma experiment and its integration with the ASDEX Upgrade Discharge Control System (DCS). Hot spots produced by heating systems erroneously or accidentally hitting the vessel walls, or from objects in the vessel reaching into the plasma outer border, show up as bright areas in the videos during and after the reaction. A system to prevent damage to the machine by allowing for intervention in a running discharge of the experiment was proposed and implemented. The VRT was implemented on a multi-core real-time Linux system. Up to 16 analog video channels (color and b/w) are acquired and multiple regions of interest (ROI) are processed on each video frame. Detected critical states can be used to initiate appropriate reactions – e.g. gracefully terminate the discharge. The system has been in routine operation since 2007.

  10. Interactive display of molecular models using a microcomputer system

    Science.gov (United States)

    Egan, J. T.; Macelroy, R. D.

    1980-01-01

    A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, PASCAL and BASIC compilers and command options for linking and executing programs. The graphics program, written in USCD PASCAL, involves the centering of the coordinate system, the transformation of centered model coordinates into homogeneous coordinates, the construction of a viewing transformation matrix to operate on the coordinates, clipping invisible points, perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. Data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggests general applicability.

  11. Perceptual tools for quality-aware video networks

    Science.gov (United States)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  12. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  13. Practical system for generating digital mixed reality video holograms.

    Science.gov (United States)

    Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il

    2016-07-10

    We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
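
    The Z-buffer mixing that the abstract relies on reduces, per pixel, to keeping whichever surface is nearer to the camera. A minimal NumPy sketch, assuming depth maps in which smaller values mean nearer (an assumption of this example, not a statement about the authors' depth convention):

        import numpy as np

        def mix_by_depth(real_rgb, real_depth, virt_rgb, virt_depth):
            # real_rgb / virt_rgb: HxWx3 colour images; *_depth: HxW depth maps.
            nearer_real = (real_depth <= virt_depth)[..., None]   # broadcast over the colour axis
            return np.where(nearer_real, real_rgb, virt_rgb)

    The hologram generation itself, which the paper parallelizes over multiple GPUs, would then consume the merged colour and depth buffers produced by a step like this.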

  14. Aspects of radiation effects due to visual display units at work

    International Nuclear Information System (INIS)

    Vana, N.

    1988-01-01

    The introduction and acceptance of video display units at work have led to a huge flood of information, rumours, and half-truths about those units. As the population became increasingly sensitized to 'radioactive radiation', there was, and in part still is, a tendency to consider effects of unclear origin, first of all ionizing radiation and later also non-ionizing radiation, as the main threat from video display units at work. Such important issues as ergonomics, stress load, visual strain, and social hygiene are often overshadowed by the question of 'the radiation load from visual display units'. The paper is an attempt to deal with aspects of radiation effects of visual display units at work. The discussion also extends to hazards, and the 'radiation environment', at the site of the visual display unit. (orig./DG) [de

  15. A real-time remote video streaming platform for ultrasound imaging.

    Science.gov (United States)

    Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel

    2016-08-01

    Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill that requires a high degree of training and hands-on experience; however, only a limited number of skillful sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform that allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.

  16. Utilization of KSC Present Broadband Communications Data System for Digital Video Services

    Science.gov (United States)

    Andrawis, Alfred S.

    2002-01-01

    This report covers a feasibility study of utilizing the present KSC broadband communications data system (BCDS) for digital video services. Digital video services include compressed digital TV delivery and video-on-demand. Furthermore, the study examines the possibility of providing interactive video-on-demand to desktop personal computers via the KSC computer network.

  17. Meibomian gland dysfunction and ocular discomfort in video display terminal workers.

    Science.gov (United States)

    Fenga, C; Aragona, P; Cacciola, A; Spinella, R; Di Nola, C; Ferreri, F; Rania, L

    2008-01-01

    Meibomian gland dysfunction (MGD) is one of the most common ocular disorders encountered in clinical practice. The clinical manifestations of MGD are related to changes in the tear film and ocular surface, with symptoms of ocular discomfort. In recent years, many surveys have evaluated symptoms associated with the use of Video Display Terminals (VDT), and VDT use is recognized as a risk factor for eye discomfort. The aim of the present study was to determine whether the presence of MGD contributes to the signs and symptoms of ocular discomfort during the use of VDT. In the course of a routine health surveillance programme, a group of 70 subjects fulfilled the inclusion criteria and responded to a questionnaire about symptoms of ocular discomfort. The following ocular tests were performed: tear break-up time, fluorescein corneal stain, and basal tear secretion test. A total of 52 subjects out of 70 (74.3%) had MGD. A statistically significant correlation between the symptoms of ocular discomfort and hours spent on VDT work was observed in the total population (r=0.358; P=0.002; 95% CI 0.13-0.54) and in the group of subjects with MGD (r=0.365; P=0.009; 95% CI 0.103-0.58). Such a correlation was not shown in subjects without MGD. The high prevalence of MGD among the subjects with symptoms of ocular discomfort suggests that this diagnosis should be considered when occupational health practitioners encounter ocular complaints among VDT operators. It appears that MGD can contribute to the development of ocular discomfort in VDT operators.

  18. Design and Implementation of a Wireless Message Display System

    Directory of Open Access Journals (Sweden)

    M. U. M. Bakura

    2016-08-01

    Full Text Available The technology of displaying messages is an important part of communication and advertisement. Wireless communication has come of age, and the world is moving to smartphone technology. This work describes the design and implementation of a microcontroller-based message display system. The display system is interfaced with an Android application, which is used to send information from one's phone to an LCD screen over a Bluetooth link. The work employs an ATMEGA328P microcontroller mounted on an Arduino board, a Bluetooth module (HC-06) and an LCD screen. Most earlier electronic display systems used wired cable connections; the Bluetooth technology used in this work is aimed at removing those cables. The microcontroller provides all the functionality of the display notices and wireless control. A desired text message from a mobile phone is sent via the Android mobile application to the Bluetooth module located at the receiving end. The mobile application was created using the online tool App Inventor. When the entire system was connected and tested, it functioned as designed without any noticeable problems. The Bluetooth module responded to commands sent from the Android application appropriately and in a timely manner. The system was able to display 80 characters on the 4 x 20 LCD within the 10 m range specified in the Bluetooth module datasheet.
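
    On the sending side, once the HC-06 module is paired and bound to a serial port, pushing a message to the display amounts to writing text over that port. Below is a hedged sketch of such a sender using the pyserial package; the port name, the newline end-of-message marker and the 80-character truncation mirror the design described above but are assumptions of this example, not the authors' Android/App Inventor implementation.

        import serial  # pyserial

        PORT = "/dev/rfcomm0"   # assumption; e.g. "COM5" on Windows once the HC-06 is paired
        BAUD = 9600             # HC-06 factory default baud rate

        def send_message(text, max_chars=80):
            msg = text[:max_chars]                       # the 4 x 20 LCD holds 80 characters
            with serial.Serial(PORT, BAUD, timeout=1) as link:
                link.write(msg.encode("ascii", errors="ignore"))
                link.write(b"\n")                        # assumed end-of-message marker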

  19. Advanced image display systems in radiology

    International Nuclear Information System (INIS)

    Wendler, T.

    1987-01-01

    Advanced image display systems for the fully digital diagnostic imaging departments of the future will be far more than simple replacements of the traditional film-viewing equipment. The new capabilities of very high resolution and highly dynamic displays offer a user-friendly and problem-oriented way of image interpretation. Advanced hardware, software, and human-machine interaction concepts are outlined. A scenario for a future way of handling and displaying images, reflecting a new image-viewing paradigm in radiology, is sketched; it has been realized in an experimental image workstation model in the laboratory which, despite its technical complexity, offers a consistent strategy for fast and convenient interaction with image objects. The perspective of knowledge-based techniques for workstation control software, with object-oriented programming environments and user- and task-adaptive behavior, leads to more advanced display properties and a new quality of user-friendliness. 2 refs.; 5 figs

  20. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer useful functions such as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function that makes surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.

  1. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts of a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks and is surrounded by gates and water. The video recordings are

  2. A novel video recommendation system based on efficient retrieval of human actions

    Science.gov (United States)

    Ramezani, Mohsen; Yaghmaee, Farzin

    2016-09-01

    In recent years, the fast growth of online video sharing has given rise to new issues, such as helping users find what they need in an efficient way. Hence, Recommender Systems (RSs) are used to find the users' most favored items. Finding these items relies on item or user similarities. However, many factors like sparsity and cold-start users affect the recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation. Differing views and incomplete or inaccurate tags can weaken the performance of these systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos are considered). In such systems, a query video is taken from the user in order to find and recommend a list of the videos most similar to it. Because most videos involve humans, we present a novel low-complexity, scalable method to recommend videos based on a model of the action they contain. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking them. The experimental results on the HMDB, UCFYT, UCF sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than commonly used methods.

  3. Spacesuit Data Display and Management System

    Science.gov (United States)

    Hall, David G.; Sells, Aaron; Shah, Hemal

    2009-01-01

    A prototype embedded avionics system has been designed for the next generation of NASA extra-vehicular-activity (EVA) spacesuits. The system performs biomedical and other sensor monitoring, image capture, data display, and data transmission. An existing NASA Phase I and II award winning design for an embedded computing system (ZIN vMetrics - BioWATCH) has been modified. The unit has a reliable, compact form factor with flexible packaging options. These innovations are significant, because current state-of-the-art EVA spacesuits do not provide capability for data displays or embedded data acquisition and management. The Phase 1 effort achieved Technology Readiness Level 4 (high fidelity breadboard demonstration). The breadboard uses a commercial-grade field-programmable gate array (FPGA) with embedded processor core that can be upgraded to a space-rated device for future revisions.

  4. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  5. Video x-ray progressive scanning: new technique for decreasing x-ray exposure without decreasing image quality during cardiac catheterization

    International Nuclear Information System (INIS)

    Holmes, D.R. Jr.; Bove, A.A.; Wondrow, M.A.; Gray, J.E.

    1986-01-01

    A newly developed video x-ray progressive scanning system improves image quality, decreases radiation exposure, and can be added to any pulsed fluoroscopic x-ray system using a video display without major system modifications. With use of progressive video scanning, the radiation entrance exposure rate measured with a vascular phantom was decreased by 32 to 53% in comparison with a conventional fluoroscopic x-ray system. In addition to this substantial decrease in radiation exposure, the quality of the image was improved because of less motion blur and artifact. Progressive video scanning has the potential for widespread application to all pulsed fluoroscopic x-ray systems. Use of this technique should make cardiac catheterization procedures and all other fluoroscopic procedures safer for the patient and the involved medical and paramedical staff

  6. Integrated Display and Environmental Awareness System - System Architecture Definition

    Science.gov (United States)

    Doule, Ondrej; Miranda, David; Hochstadt, Jake

    2017-01-01

    The Integrated Display and Environmental Awareness System (IDEAS) is an interdisciplinary team project focusing on the development of a wearable computer and Head Mounted Display (HMD) based on Commercial-Off-The-Shelf (COTS) components for the specific application and needs of NASA technicians, engineers and astronauts. Wearable computers are on the verge of utilization trials in daily life as well as industrial environments. The first civil and COTS wearable head-mounted display systems were introduced just a few years ago, and they probed not only technology readiness in terms of performance, endurance, miniaturization, operability and usefulness, but also the maturity of practice in a socio-technical context. Although the main technical hurdles such as mass and power were addressed as improvements on the technical side, questions of usefulness, practicality and social acceptance were often raised with respect to the broad variety of human operations. In other words, although the technology made a giant leap, its use and efficiency are still looking for the sweet spot. The first IDEAS project started in January 2015 and was concluded in January 2017. The project identified current COTS systems' capability at minimum cost and maximum applicability and brought about important strategic concepts that will serve further IDEAS-like system development.

  7. Evaluating the image quality of Closed Circuit Television magnification systems versus a head-mounted display for people with low vision. .

    Science.gov (United States)

    Lin, Chern Sheng; Jan, Hvey-An; Lay, Yun-Long; Huang, Chih-Chia; Chen, Hsien-Tse

    2014-01-01

    In this research, image analysis was used to optimize the visual output of a traditional Closed Circuit Television (CCTV) magnifying system and a head-mounted display (HMD) for people with low vision. There were two purposes: (1) to determine the benefit of using an image analysis system to customize image quality for a person with low vision, and (2) to have people with low vision evaluate a traditional CCTV magnifier and an HMD, each customized to the user's needs and preferences. A CCTV system can electronically alter images by increasing the contrast, brightness, and magnification for the visually disabled when they are reading text and pictures. Test methods were developed to evaluate and customize a magnification system for persons with low vision. The head-mounted display with CCTV was used to obtain a better depth of field and a higher modulation transfer function from the video camera. By sensing the parameters of the environment (e.g., ambient light level) and collecting the user's specific characteristics, the system could make adjustments according to the user's needs, thus allowing the visually disabled to read more efficiently.

  8. Qinshan plant display system: experience to date

    International Nuclear Information System (INIS)

    Bin, L.; Jiangdong, Y.; Weili, C.; Haidong, W.; Wangtian, L.; Lockwood, R.; Doucet, R.; Trask, D.; Judd, R.

    2004-01-01

    The two CANDU 6 units operated by the Third Qinshan Nuclear Power Corporation (TQNPC) include, as part of a control centre upgrade, a new plant display system (PDS). The PDS provides plant operators with new display and monitoring functionality designed to complement the DCC capability. It includes new overview and trend displays (e.g., critical safety parameter monitor and user-defined trends), and enhanced annunciation based on AECL's Computerized Alarm Message List System (CAMLS), including an alarm interrogation capability. This paper presents a review of operating experience gained since the PDS was commissioned more than three years ago. It includes feedback provided by control room operators and trainers, PDS maintainers, and AECL development and support staff. It also includes an overview of improvements implemented since commissioning and suggestions for future enhancements. (author)

  9. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
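
    The comparison step of such a scheme can be sketched in a few lines: both ends derive the same pseudo-random sample points from a shared seed, and the image is authenticated only if the gray values at those points agree within a tolerance at all but a few positions. The sample count, tolerance and mismatch limit below are illustrative values, not the ones used in the described system.

        import numpy as np

        def authenticate(camera_frame, recorder_frame, n_points=64,
                         tolerance=8, max_mismatches=4, seed=1234):
            rng = np.random.default_rng(seed)            # seed shared by camera and recorder ends
            h, w = camera_frame.shape
            ys = rng.integers(0, h, n_points)
            xs = rng.integers(0, w, n_points)
            diff = np.abs(camera_frame[ys, xs].astype(int) -
                          recorder_frame[ys, xs].astype(int))
            return int((diff > tolerance).sum()) <= max_mismatches

    A frozen or substituted image fails this check at many sample points, whereas normal transmission noise stays within the tolerance.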

  10. Near real-time bi-planar fluoroscopic tracking system for the video tumor fighter

    International Nuclear Information System (INIS)

    Lawson, M.A.; Wika, K.G.; Gillies, G.T.; Ritter, R.C.

    1991-01-01

    The authors have developed software capable of the three-dimensional tracking of objects in the brain volume, and the subsequent overlaying of an image of the object onto previously obtained MR or CT scans. This software has been developed for use with the Magnetic Stereotaxis System (MSS), also called the Video Tumor Fighter (VTF). The software was written for a Sun 4/110 SPARC workstation with an ANDROX ICS-400 image processing card installed to manage this task. At present, the system uses input from two orthogonally oriented, visible-light cameras and a simulated scene to determine the three-dimensional position of the object of interest. The coordinates are then transformed into MR or CT coordinates, and an image of the object is displayed in the appropriate intersecting MR slice on a computer screen. This paper describes the tracking algorithm and discusses how it was implemented in software. The system's hardware is also described. The limitations of the present system are discussed, and plans for incorporating bi-planar x-ray fluoroscopy are presented

  11. Evaluation of video detection systems, volume 1 : effects of configuration changes in the performance of video detection systems.

    Science.gov (United States)

    2009-10-01

    The effects of modifying the configuration of three video detection (VD) systems (Iteris, Autoscope, and Peek) are evaluated in daytime and nighttime conditions. Four types of errors were used: false, missed, stuck-on, and dropped calls. The thre...

  12. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing; however, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium with 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
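
    The automatic background estimation step mentioned above can be approximated with a per-pixel temporal statistic over frames from one fixed, calibrated camera; the median used below is a common stand-in for the chrominance-change analysis the authors describe, not their exact method.

        import numpy as np

        def estimate_background(frames):
            # frames: list of HxWx3 uint8 images from a static camera.
            # Players keep moving, so at most pixels the temporal median
            # converges to the empty pitch.
            stack = np.stack(frames, axis=0)
            return np.median(stack, axis=0).astype(np.uint8)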

  13. Feasibility of video codec algorithms for software-only playback

    Science.gov (United States)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
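
    Frame differencing, which the abstract singles out as a key performance lever for software-only playback, can be reduced to a toy encode/decode pair: only pixels that changed beyond a threshold are carried forward, so static regions cost almost nothing to decode. The threshold and data layout are illustrative, not those of any particular codec.

        import numpy as np

        def encode_frame(curr, prev, threshold=4):
            diff = curr.astype(int) - prev.astype(int)
            mask = np.abs(diff) > threshold
            return mask, diff[mask]              # changed positions + residuals to be coded

        def decode_frame(prev, mask, residuals):
            out = prev.astype(int).copy()
            out[mask] += residuals               # touch only the changed pixels
            return np.clip(out, 0, 255).astype(np.uint8)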

  14. A video imaging system and related control hardware for nuclear safeguards surveillance applications

    International Nuclear Information System (INIS)

    Whichello, J.V.

    1987-03-01

    A novel video surveillance system has been developed for safeguards applications in nuclear installations. The hardware was tested at a small experimental enrichment facility located at the Lucas Heights Research Laboratories. The system uses digital video techniques to store, encode and transmit still television pictures over the public telephone network to a receiver located in the Australian Safeguards Office at Kings Cross, Sydney. A decoded, reconstructed picture is then obtained using a second video frame store. A computer-controlled video cassette recorder is used automatically to archive the surveillance pictures. The design of the surveillance system is described with examples of its operation

  15. 3D video coding: an overview of present and upcoming standards

    Science.gov (United States)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats, the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, the H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats, standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  16. Realization on the interactive remote video conference system based on multi-Agent

    Directory of Open Access Journals (Sweden)

    Zheng Yan

    2016-01-01

    Full Text Available To allow people in different places to participate in the same conference and to speak and discuss freely, an interactive remote video conferencing system was designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference node can then join the speaking and discussion by applying to become an interactive focus; the introduction of multi-Agent collaboration technology improves the system's robustness. The experiments showed that, under normal network conditions, the system can support 350 branch conference nodes simultaneously for live broadcasting. The audio and video quality is smooth, and the system can carry out a large-scale remote video conference.

  17. A programmable display layer for virtual reality system architectures

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.; Fröhlich, B.

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We

  18. Development of plant status display system for on-site educational training system

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Fujimoto, Junzo; Okamoto, Hisatake; Tsunoda, Ryohei; Watanabe, Takao; Masuko, Jiro.

    1986-01-01

    The purpose of this system is to make the facility and dynamics of nuclear power plants easier to comprehend. This report describes the current tendency and future direction of educational training systems, and furthermore describes the experiment. The main results are as follows. 1. The present status and future tendency of educational training systems for nuclear power plant operators: a CAI (Computer Assisted Instruction) system has the following characteristics. (1) It is easy to introduce plant-specific characteristics into the educational training. (2) It is easy to provide detailed training that complements the full-scale simulator. 2. Plant status display system for on-site educational training: the fundamental functions of the system are as follows. (1) It has two CRT displays and voice output devices. (2) It has an easy-to-manipulate man-machine interface. (3) It has a function for evaluating training results. 3. The effectiveness of this system: an effectiveness evaluation test was carried out using the actual system. (1) The system proved to be essentially effective, and some improvements for future utilization were pointed out. (2) Switching of the CRT displays should be faster, and an explanation function should be provided when plant transients are displayed. (author)

  19. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity, and finding an optimal quality/volume ratio for video encoding is one of the most pressing problems due to the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream; compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to research the influence of video compression on the measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems; accuracy characterizes the difference between the measured value and the actual parameter value. The optical system is one source of error in television system measurements; the method used to process the received video signal is another. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image. This redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other if a corresponding orthogonal transformation can be found. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. A transformation can be chosen such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also
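
    The intra-coding argument above (orthogonal transform, near-zero coefficients, entropy coding) can be made concrete with a small numerical sketch. The following Python fragment is illustrative only and is not taken from the paper; the 8x8 block size, the smooth test block and the quantization step are assumptions.

```python
# Illustrative sketch of the intra-coding idea described above: an 8x8
# orthonormal DCT concentrates the energy of a smooth image block into a few
# coefficients, so coarse quantization leaves most coefficients at zero and
# entropy coding can shrink the stream. The quantization step is an assumption.
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    d = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    d[0, :] *= 1.0 / np.sqrt(2.0)
    return d * np.sqrt(2.0 / n)

D = dct_matrix(8)
block = np.tile(np.linspace(0, 32, 8), (8, 1))   # smooth ramp, like a typical image patch
coeffs = D @ block @ D.T                         # 2-D DCT: transform rows, then columns
quantized = np.round(coeffs / 16)                # assumed uniform quantization step of 16
print("non-zero coefficients:", np.count_nonzero(quantized), "of 64")
```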

  20. High-speed holographic correlation system for video identification on the internet

    Science.gov (United States)

    Watanabe, Eriko; Ikeda, Kanami; Kodate, Kashiko

    2013-12-01

    Automatic video identification is important for indexing, search purposes, and removing illegal material on the Internet. By combining a high-speed correlation engine and web-scanning technology, we developed the Fast Recognition Correlation system (FReCs), a video identification system for the Internet. FReCs is an application that searches through a number of websites with user-generated content (UGC) and detects video content that violates copyright law. In this paper, we describe the FReCs configuration and an approach to investigating UGC websites using FReCs. The paper also illustrates the combination of FReCs with an optical correlation system, which is capable of easily replacing the digital authorization server in FReCs with optical correlation.
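
    As a rough illustration of correlation-based video identification (not the FReCs engine itself), the following Python sketch reduces frames to small grayscale signatures and matches a query frame against reference signatures by normalized correlation; the signature size and the structure of the reference set are assumptions.

```python
# Toy sketch of correlation-based frame matching: each grayscale frame is
# reduced to a small zero-mean, unit-norm signature, and a query is matched
# against reference signatures by their inner product (normalized correlation).
import numpy as np

def signature(frame: np.ndarray, size: int = 16) -> np.ndarray:
    """Downsample a grayscale frame to a size x size zero-mean, unit-norm signature."""
    h, w = frame.shape
    sig = frame[: h - h % size, : w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    sig = sig - sig.mean()
    return sig / (np.linalg.norm(sig) + 1e-9)

def best_match(query: np.ndarray, references: list) -> tuple:
    """Return (index, score) of the reference frame most correlated with the query."""
    q = signature(query)
    scores = [float((q * signature(r)).sum()) for r in references]
    best = int(np.argmax(scores))
    return best, scores[best]
```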

  1. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
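
    As an illustration of the background-subtraction component such a system might use (not the authors' implementation), the following Python/OpenCV sketch flags a parking stall as occupied when enough foreground pixels fall inside its region of interest; the stall coordinates, input file name and 30% coverage rule are hypothetical.

```python
# Minimal background-subtraction sketch using OpenCV's MOG2 model. Each stall
# is a fixed rectangle; it is reported as occupied when foreground pixels cover
# more than an assumed fraction of the stall area.
import cv2

stalls = [(50, 200, 120, 80), (180, 200, 120, 80)]   # assumed (x, y, w, h) parking stalls
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

cap = cv2.VideoCapture("street_block.mp4")            # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)                       # 255 = foreground, 127 = shadow
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    for i, (x, y, w, h) in enumerate(stalls):
        roi = fg[y:y+h, x:x+w]
        occupied = cv2.countNonZero(roi) > 0.3 * w * h       # assumed 30% coverage rule
        print(f"stall {i}: {'occupied' if occupied else 'vacant'}")
cap.release()
```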

  2. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing the physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, and other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  3. An expert display system and nuclear power plant control rooms

    International Nuclear Information System (INIS)

    Beltracchi, L.

    1988-01-01

    An expert display system automatically controls the display of segments on a cathode ray tube's screen to form an image of plant operations. The image consists of an icon of: 1) the process (heat engine cycle), 2) plant control systems, and 3) safety systems. A set of data-driven, forward-chaining, computer-stored rules controls the display of segments. As plant operation changes, measured plant data are processed through the rules, and the results control the deletion and addition of segments in the display format. The icon contains information needed by control room operators to monitor plant operations. One example of an expert display is illustrated for the operator's task of monitoring leakage from a safety valve in a steam line of a boiling water reactor (BWR). In another example, the use of an expert display to monitor plant operations during pre-trip, trip, and post-trip operations is discussed as a universal display. The viewpoints and opinions expressed herein are the author's personal ones, and they are not to be interpreted as Nuclear Regulatory Commission criteria, requirements, or guidelines
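
    The data-driven, forward-chaining behaviour described above can be sketched in a few lines of Python. The rules, sensor names and segment names below are purely illustrative placeholders, not Beltracchi's actual rule set; the point is only that rules fire on measured data and on previously derived segments until nothing new can be added.

```python
# Toy forward-chaining sketch: rules examine plant readings and the segments
# already derived, and add display segments until no rule can fire any more.
def run_rules(readings: dict) -> set:
    """Return the set of display segments to draw for the current readings."""
    segments = set()
    rules = [
        # (condition over readings and derived segments, segment added when the rule fires)
        (lambda r, s: r["steam_line_temp"] > 150.0,            "SAFETY_VALVE_LEAK"),
        (lambda r, s: r["reactor_power_pct"] > 2.0,            "HEAT_ENGINE_CYCLE"),
        (lambda r, s: "SAFETY_VALVE_LEAK" in s
                      and "HEAT_ENGINE_CYCLE" in s,            "LEAK_AT_POWER_ALERT"),
    ]
    changed = True
    while changed:                      # forward chaining: iterate until nothing new is derived
        changed = False
        for condition, segment in rules:
            if segment not in segments and condition(readings, segments):
                segments.add(segment)
                changed = True
    return segments

print(run_rules({"steam_line_temp": 180.0, "reactor_power_pct": 95.0}))
```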

  4. An Investigation for Arranging the Video Display Unit Information in a Main Control Room of Advanced Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Chong Cheng; Yang, Chih Wei [Institute of Nuclear Energy Research, Atomic Energy Council, Taoyuan (China)

    2014-08-15

    Current digital instrumentation and control and main control room (MCR) technology has extended the capability of integrating information from numerous plant systems and transmitting needed information to operations personnel in a timely manner that could not be envisioned when previous generation plants were designed and built. An MCR operator can complete all necessary operating actions on the video display unit (VDU). It is extremely flexible and convenient for operators to select and to control the system display on the screen. However, a high degree of digitalization has some risks. For example, in nuclear power plants, failures in the instrumentation and control devices could stop the operation of the plant. Human factors engineering (HFE) approaches are one way to address this problem. Under HFE considerations, a 'population stereotype' exists in operation; that is, the operator is used to operating a specific display on a specific VDU. Under emergency conditions, there is a possibility that the operator will respond according to this habitual population stereotype and not be aware that the current situation has already changed. Accordingly, the advanced nuclear power plant should establish an MCR VDU configuration plan to meet the consistent teamwork goal under normal operation, transient and accident conditions. On the other hand, the advanced nuclear power plant should establish a human factors verification and validation plan for the MCR VDU configuration to verify and validate the configuration of the MCR VDUs, and to ensure that the MCR VDU configuration allows the operator shift to meet the HFE considerations and the consistent teamwork goal under normal operation, transient and accident conditions. This paper is one of the HF V&V plans of the MCR VDU configuration of the advanced nuclear power plant. The purpose of this study is to confirm whether the VDU configuration meets the human factors principles and the consistent

  5. An Investigation for Arranging the Video Display Unit Information in a Main Control Room of Advanced Nuclear Power Plants

    International Nuclear Information System (INIS)

    Hsu, Chong Cheng; Yang, Chih Wei

    2014-01-01

    Current digital instrumentation and control and main control room (MCR) technology has extended the capability of integrating information from numerous plant systems and transmitting needed information to operations personnel in a timely manner that could not be envisioned when previous generation plants were designed and built. An MCR operator can complete all necessary operating actions on the video display unit (VDU). It is extremely flexible and convenient for operators to select and to control the system display on the screen. However, a high degree of digitalization has some risks. For example, in nuclear power plants, failures in the instrumentation and control devices could stop the operation of the plant. Human factors engineering (HFE) approaches are one way to address this problem. Under HFE considerations, a 'population stereotype' exists in operation; that is, the operator is used to operating a specific display on a specific VDU. Under emergency conditions, there is a possibility that the operator will respond according to this habitual population stereotype and not be aware that the current situation has already changed. Accordingly, the advanced nuclear power plant should establish an MCR VDU configuration plan to meet the consistent teamwork goal under normal operation, transient and accident conditions. On the other hand, the advanced nuclear power plant should establish a human factors verification and validation plan for the MCR VDU configuration to verify and validate the configuration of the MCR VDUs, and to ensure that the MCR VDU configuration allows the operator shift to meet the HFE considerations and the consistent teamwork goal under normal operation, transient and accident conditions. This paper is one of the HF V&V plans of the MCR VDU configuration of the advanced nuclear power plant. The purpose of this study is to confirm whether the VDU configuration meets the human factors principles and the consistent

  6. VME Switch for CERN's PS Analog Video System

    CERN Document Server

    Acebes, I; Heinze, W; Lewis, J; Serrano, J

    2003-01-01

    Analog video signal switching is used in CERN's Proton Synchrotron (PS) complex to route the video signals coming from Beam Diagnostics systems to the Meyrin Control Room (MCR). Traditionally, this has been done with custom electromechanical relay-based cards controlled serially via CAMAC crates. In order to improve the robustness and maintainability of the system, while keeping it analog to preserve the low latency, a VME card based on Analog Devices' AD8116 analog matrix chip has been developed. Video signals go into the front panel and exit the switch through the P2 connector of the VME backplane. The module is a 16 input, 32 output matrix. Larger matrices can be built using more modules and bussing their outputs together, thanks to the high impedance feature of the AD8116. Another VME module takes the selected signals from the P2 connector and performs automatic gain to send them at nominal output level through its front panel. This paper discusses both designs and presents experimental test results.

  7. An effective method of collecting practical knowledge by presentation of videos and related words

    Directory of Open Access Journals (Sweden)

    Satoshi Shimada

    2017-12-01

    Full Text Available The concentration of practical knowledge and experiential knowledge in the form of collective intelligence (the wisdom of the crowd is of interest in the area of skill transfer. Previous studies have confirmed that collective intelligence can be formed through the utilization of video annotation systems where knowledge that is recalled while watching videos of work tasks can be assigned in the form of a comment. The knowledge that can be collected is limited, however, to the content that can be depicted in videos, meaning that it is necessary to prepare many videos when collecting knowledge. This paper proposes a method for expanding the scope of recall from the same video through the automatic generation and simultaneous display of related words and video scenes. Further, the validity of the proposed method is empirically illustrated through the example of a field experiment related to mountaineering skills.

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  10. Actively addressed single pixel full-colour plasmonic display

    Science.gov (United States)

    Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis

    2017-05-01

    Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video.

  11. Exterior field evaluation of new generation video motion detection systems

    International Nuclear Information System (INIS)

    Malone, T.P.

    1988-01-01

    Recent advancements in video motion detection (VMD) system design and technology have resulted in several new commercial VMD systems. Considerable interest in the new VMD systems has been generated because the systems are advertised to work effectively in exterior applications. Previous VMD systems, when used in an exterior environment, tended to have very high nuisance alarm rates due to weather conditions, wildlife activity and lighting variations. The new VMD systems advertise more advanced processing of the incoming video signal which is aimed at rejecting exterior environmental nuisance alarm sources while maintaining a high detection capability. This paper discusses the results of field testing, in an exterior environment, of two new VMD systems

  12. A new concept of safety parameter display system

    International Nuclear Information System (INIS)

    Martinez, A.S.; Oliveira, L.F.S. de; Schirru, R.; Thome Filho, Z.D.; Silva, R.A. da.

    1986-07-01

    A general description of the Angra-1 Parameter Display System (SSPA), a real-time, on-line computerized system for monitoring the parameters related to power plant safety, is presented. The main purpose of this system is to reduce the load on the Angra-1 power plant operators during an emergency event by supplying them with additional tools that serve as the basis for prompt identification of the accident. The SSPA is a kind of safety parameter display system, a concept introduced after the Three Mile Island accident in the USA. The SSPA comprises two independently considered nuclear applications: the Parameters Monitoring Integrated System (SIMP) and the safety critical function system (SFCS). (Author)

  13. Toward enhancing the distributed video coder under a multiview video codec framework

    Science.gov (United States)

    Lee, Shih-Chieh; Chen, Jiann-Jone; Tsai, Yao-Hong; Chen, Chin-Hua

    2016-11-01

    The advance of video coding technology enables multiview video (MVV) or three-dimensional television (3-D TV) display for users with or without glasses. For mobile devices or wireless applications, a distributed video coder (DVC) can be utilized to shift the encoder complexity to the decoder under the MVV coding framework, denoted as multiview distributed video coding (MDVC). We proposed to exploit both inter- and intraview video correlations to enhance side information (SI) and improve the MDVC performance: (1) based on the multiview motion estimation (MVME) framework, a categorized block matching prediction with fidelity weights (COMPETE) was proposed to yield a high-quality SI frame for better DVC reconstructed images. (2) The block transform coefficient properties, i.e., DCs and ACs, were exploited to design the priority rate control for the turbo code, such that the DVC decoding can be carried out with the fewest parity bits. In comparison, the proposed COMPETE method demonstrated lower time complexity while presenting better reconstructed video quality. Simulations show that the proposed COMPETE can reduce the time complexity of MVME by a factor of 1.29 to 2.56 as compared to previous hybrid MVME methods, while the peak signal-to-noise ratios (PSNRs) of the decoded video can be improved by 0.2 to 3.5 dB as compared to H.264/AVC intra coding.

  14. Specialized video systems for use in waste tanks

    International Nuclear Information System (INIS)

    Anderson, E.K.; Robinson, C.W.; Heckendorn, F.M.

    1992-01-01

    The Robotics Development Group at the Savannah River Site is developing a remote video system for use in the site's underground radioactive waste storage tanks as a portion of its site support role. Viewing of the tank interiors and their associated annular spaces is an extremely valuable tool in assessing their condition and controlling their operation. Several specialized video systems have been built that provide remote viewing and lighting, including remotely controlled tank entry and exit. Positioning all control components away from the facility prevents the potential for personnel exposure to radiation and contamination. The SRS waste tanks are nominally 4.5 million liter (1.3 million gallon) underground tanks used to store liquid high-level radioactive waste generated by the site, awaiting final disposal. The typical waste tank (Figure 1) is of flattened shape (i.e. wider than high). The tanks sit in a dry secondary containment pan. The annular space between the tank wall and the secondary containment wall is continuously monitored for liquid intrusion and periodically inspected and documented; the latter was historically accomplished with remote still photography. Each video system includes a camera, zoom lens, camera positioner, and vertical deployment mechanism. The assembly enters through a 125 mm (5 in) diameter opening. A special attribute of the systems is that they never become larger than the entry hole during camera aiming and can therefore always be retrieved. The latest systems are easily deployable to a remote setup point and can extend down vertically 15 meters (50 ft). The systems are expected to be a valuable asset to tank operations.

  15. Information retrieval and display system

    Science.gov (United States)

    Groover, J. L.; King, W. L.

    1977-01-01

    Versatile command-driven data management system offers users, through simplified command language, a means of storing and searching data files, sorting data files into specified orders, performing simple or complex computations, effecting file updates, and printing or displaying output data. Commands are simple to use and flexible enough to meet most data management requirements.

  16. Paradigm for expert display systems in nuclear plant and elsewhere

    International Nuclear Information System (INIS)

    Gabriel, J.R.

    1986-02-01

    Display of relevant data concerning plant operation has been a concern of the nuclear industry from its beginnings. Since the incident at Three Mile Island, this matter has had much careful scrutiny. L. Beltracchi, in particular, has originated a sequence of important steps to improve the operator's ability to recognize plant states and their changes. In the early 1980's, Beltracchi (1983, 1984) proposed a display based on the Rankine cycle for light water reactors. More recently, in an unpublished work (1986b), he described an extension that includes a small, rule-based system in the display program, drawing inferences about plant operation from sensor readings, and displaying those inferences on the Rankine display. Our paper examines Beltracchi's rule-based display from the perspective of knowledge bases. Earlier (Gabriel, 1983) we noted that analytical models of system behavior are just as much a knowledge base as are the rules of a conventional expert system. The problem of finding useful displays for a complex plant is discussed from this perspective. We then present a paradigm for developing designs with properties similar to those in Beltracchi's Rankine cycle display. Finally, to clarify the issue, we give a small example from an imaginary plant

  17. Status, recent developments and perspective of TINE-powered video system, release 3

    International Nuclear Information System (INIS)

    Weisse, S.; Melkumyan, D.; Duval, P.

    2012-01-01

    Experience has shown that imaging software and hardware installations at accelerator facilities need to be changed, adapted and updated on a semi-permanent basis. On this premise the component-based core architecture of Video System 3 was founded. In design and implementation, emphasis was, is, and will be put on flexibility, performance, low latency, modularity, interoperability, use of open source, ease of use as well as reuse, good documentation and multi-platform capability. In the past year, a milestone was reached as Video System 3 entered production level at PITZ, Hasylab and PETRA III. Since then, the development path has been more strongly influenced by production-level experience and customer feedback. In this contribution, we describe the current status, layout, recent developments and perspective of the Video System. Focus will be put on integration of recording and playback of video sequences to Archive/DAQ, a standalone installation of the Video System on a notebook, as well as experiences running on 64-bit Windows 7. In addition, new client-side multi-platform GUI/application developments using Java are about to hit the surface. Last but not least, it must be mentioned that although the implementation of Release 3 is integrated into the TINE control system, it is modular enough that integration into other control systems can be considered. (authors)

  18. Impact of video games on plasticity of the hippocampus.

    Science.gov (United States)

    West, G L; Konishi, K; Diarra, M; Benady-Chorney, J; Drisdelle, B L; Dahmani, L; Sodums, D J; Lepore, F; Jolicoeur, P; Bohbot, V D

    2017-08-08

    The hippocampus is critical to healthy cognition, yet results in the current study show that action video game players have reduced grey matter within the hippocampus. A subsequent randomised longitudinal training experiment demonstrated that first-person shooting games reduce grey matter within the hippocampus in participants using non-spatial memory strategies. Conversely, participants who use hippocampus-dependent spatial strategies showed increased grey matter in the hippocampus after training. A control group that trained on 3D-platform games displayed growth in either the hippocampus or the functionally connected entorhinal cortex. A third study replicated the effect of action video game training on grey matter in the hippocampus. These results show that video games can be beneficial or detrimental to the hippocampal system depending on the navigation strategy that a person employs and the genre of the game.Molecular Psychiatry advance online publication, 8 August 2017; doi:10.1038/mp.2017.155.

  19. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design issue for devices integrating complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretical analysis into the resource allocation process so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
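
    One way to picture the energy-budget adaptation described above (an illustrative sketch, not the ESVD algorithm) is a scheduler that picks, from a set of decoding profiles, the highest-utility profile whose estimated per-frame energy cost still fits the remaining budget; all profile names, costs and utility scores below are hypothetical.

```python
# Toy energy-aware profile selection: choose the highest-utility decoding
# profile whose per-frame energy cost fits within the per-frame energy budget.
profiles = [
    # (name, estimated energy per frame in mJ, utility score)
    ("full_decode",      9.0, 1.00),
    ("skip_deblocking",  6.5, 0.85),
    ("luma_only",        4.0, 0.60),
    ("drop_b_frames",    2.5, 0.40),
]

def pick_profile(remaining_energy_mj: float, frames_left: int) -> str:
    """Select the best profile whose per-frame cost fits the per-frame budget."""
    budget_per_frame = remaining_energy_mj / max(frames_left, 1)
    feasible = [p for p in profiles if p[1] <= budget_per_frame]
    if not feasible:
        return profiles[-1][0]          # fall back to the cheapest profile
    return max(feasible, key=lambda p: p[2])[0]

print(pick_profile(remaining_energy_mj=1200.0, frames_left=300))   # -> "luma_only"
```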

  20. Face detection for interactive tabletop viewscreen system using olfactory display

    Science.gov (United States)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2009-10-01

    An olfactory display is a device that delivers smells to the nose. It provides us with special effects, for example to emit a smell as if you were there, or to give a trigger for reminding us of memories. The authors have developed a tabletop display system connected with an olfactory display. For delivering a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that can detect the nose position for effective delivery.

  1. Reduced complexity MPEG2 video post-processing for HD display

    DEFF Research Database (Denmark)

    Virk, Kamran; Li, Huiying; Forchhammer, Søren

    2008-01-01

    implementation. The enhanced deringing combined with the deblocking achieves PSNR improvements of 0.5 dB on average over the basic deblocking and deringing on SDTV and HDTV test sequences. The deblocking and deringing models described in the paper are generic and applicable to a wide variety of common (8×8) DCT-block based real-time video schemes.

  2. Open control/display system for a telerobotics work station

    Science.gov (United States)

    Keslowitz, Saul

    1987-01-01

    A working Advanced Space Cockpit was developed that integrated advanced control and display devices into a state-of-the-art multimicroprocessor hardware configuration, using window graphics and running under an object-oriented, multitasking real-time operating system environment. This Open Control/Display System supports the idea that the operator should be able to interactively monitor, select, control, and display information about many payloads aboard the Space Station using sets of I/O devices with a single, software-reconfigurable workstation. This is done while maintaining system consistency, yet the system is completely open to accept new additions and advances in hardware and software. The Advanced Space Cockpit, linked to Grumman's Hybrid Computing Facility and Large Amplitude Space Simulator (LASS), was used to test the Open Control/Display System via full-scale simulation of the following tasks: telerobotic truss assembly, RCS and thermal bus servicing, CMG changeout, RMS constrained motion and space constructible radiator assembly, HPA coordinated control, and OMV docking and tumbling satellite retrieval. The proposed man-machine interface standard discussed has evolved through many iterations of the tasks, and is based on feedback from NASA and Air Force personnel who performed those tasks in the LASS.

  3. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features that are consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft: the video comes from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
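
    A minimal CPU-side sketch of the SIFT-plus-homography registration step described above is given below using OpenCV; it does not reproduce the paper's GPU-accelerated pipeline, and the Lowe ratio value and the simple overwrite blending are assumptions.

```python
# SIFT feature matching, RANSAC homography estimation, and warping of a new
# frame into the mosaic canvas. Blending here is a plain overwrite; a real
# mosaicker would feather or multi-band blend the seams.
import cv2
import numpy as np

def register(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Estimate the homography mapping curr onto prev from matched SIFT features."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]   # Lowe ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def warp_into_mosaic(mosaic: np.ndarray, frame: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Warp the new frame into the mosaic's coordinate system and paste it in."""
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0
    mosaic[mask] = warped[mask]           # simple overwrite; real blending is more careful
    return mosaic
```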

  4. Effects of training intervention on non-ergonomic positions among video display terminals (VDT) users.

    Science.gov (United States)

    Mirmohammadi, Seyed Jalil; Mehrparvar, Amir Houshang; Olia, Mohammad Bagher; Mirmohammadi, Monirolsadat

    2012-01-01

    Substantial evidence shows an association between musculoskeletal disorders (MSDs) and certain work-related physical factors. One of the jobs with known ergonomic hazards is working with video display terminals (VDTs). Redesign, ergonomic improvements, and education have generally been recommended as solutions for the prevention of musculoskeletal disorders. We designed this study to assess the effects of ergonomic training on the working postures of VDT users. In an intervention study, we assessed the impact of ergonomic training on the ergonomic hazards and work postures of employees working with VDTs. Participants and their workstations were assessed by the Rapid Upper Limb Assessment (RULA) method before and after training. Seventy employees of an office, working with a VDT more than four hours per day, entered the study. The greatest compliance with OSHA workstation recommendations was seen with the monitor (21.4% of cases) and the least with the chair (10.0%). Mean RULA scores before and after the intervention were 5.90 and 5.07, respectively, and the difference was statistically significant. Training VDT users in office ergonomics, even without changing workplace components, can significantly improve their behavior and their ability to properly fit a workstation to themselves.

  5. Development of the video streaming system for the radiation safety training

    International Nuclear Information System (INIS)

    Uemura, Jitsuya

    2005-01-01

    Radiation workers have to receive radiation safety training every year. It is hard for them to receive this training within the limited opportunities available. We therefore developed a new training system using video streaming and opened a web page for the training on our homepage. Every worker can receive the video lecture at any time and at any place using his PC via the Internet. After watching the video, the worker takes a completion examination. If he passes the examination, he is registered as a radiation worker in the database system for radiation control. (author)

  6. Safety parameter display system (SPDS) for Russian-designed NPPs

    International Nuclear Information System (INIS)

    Anikanov, S.S.; Catullo, W.J.; Pelusi, J.L.

    1997-01-01

    As part of the programs aimed at improving the safety of Russian-designed reactors, the US DoE has sponsored a project to provide a safety parameter display system (SPDS) for nuclear power plants with such reactors. The present paper focuses mostly on the system architecture design features of SPDS systems for WWER-1000 and RBMK-1000 reactors. The function and the operating modes of the SPDS are outlined, and a description of the display system is given. The system architecture and system design of both an integrated and a stand-alone I&C system are explained. (A.K.)

  7. Virtual vision system with actual flavor by olfactory display

    Science.gov (United States)

    Sakamoto, Kunio; Kanazawa, Fumihiro

    2010-11-01

    The authors have researched a multimedia system and a support system for nursing studies on, and practices of, reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963. The process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. The effect is mentioned in Proust's novel as an episode in which the narrator recalls an old memory when he dips a madeleine in tea. Many scientists have researched why smells trigger memory. The authors pay attention to the relation between smells and memory, although the reason is not yet evident. We have therefore tried to add an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides us with special effects, for example to emit a smell as if you were there, or to give a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. For delivering a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that can detect the nose position for effective delivery.

  8. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  9. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

    Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been carried out by direct observation, which is time demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, which means ca. 0.35 sec of reviewing per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).

  10. Health issues of the operators on video display units - the consequence of electromagnetic radiation or something else

    International Nuclear Information System (INIS)

    Brumen, V.; Garaj-Vrhovac, V.; Franekic Colic, J.; Radalj, Z.

    2005-01-01

    Over the last few decades, video display units (VDUs) have become inevitable in a number of workplaces. This has raised a debate on the possible health effects of occupational exposure to VDUs. The most frequently reported complaints are eyestrain, bone/muscle disorders and psychological problems related to monotonous, repetitive work. Questions have been raised as to whether regular exposure to the electromagnetic fields generated by the screen could induce cataract. In response to a direct inquiry from VDU operators at a large Croatian company as to whether their eye discomfort could be attributed to screen irradiation, we measured the electromagnetic fields generated by the screens. Measurements were performed using a portable measuring device PMM 8051, serial number 0182 (PMM Costruzioni Elettroniche Centro Misure Radioelettriche s.r. Milan, Italy) with three different probes, with and without a filter. The results showed that the screen-generated fields were far below the threshold required for causing a cataract, which ruled out the possibility that an actinic effect produced the eye discomfort. (author)

  11. Toxicologic study of electromagnetic radiation emitted by television and video display screens and cellular telephones on chickens and mice

    International Nuclear Information System (INIS)

    Bastide, M.; Youbicier-Simo, B.J.; Lebecq, J.C.; Giaimis, J.; Youbicier-Simo, B.J.

    2001-01-01

    The effects of continuous exposure of chick embryos and young chickens to the electromagnetic fields (EMFs) emitted by video display units (VDUs) and to GSM cell phone radiation, either the whole emitted spectrum or the spectrum attenuated by a copper gauze, were investigated. Permanent exposure to the EMFs radiated by a VDU was associated with significantly increased fetal loss (47-68%) and markedly depressed levels of circulating specific antibodies (IgG), corticosterone and melatonin. We have also shown that under chronic exposure conditions, GSM cell phone radiation was harmful to chick embryos, stressful for healthy mice and, in this species, synergistic with cancer insofar as it depleted stress hormones. The same pathological results were observed after substantial reduction of the microwaves radiated from the cell phone by attenuating them with a copper gauze. (author)

  12. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    Science.gov (United States)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with open-architecture as well as closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.

  13. A remote educational system in medicine using digital video.

    Science.gov (United States)

    Hahm, Joon Soo; Lee, Hang Lak; Kim, Sun Il; Shimizu, Shuji; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Tae Eun; Yun, Ji Won; Park, Yong Jin; Naoki, Nakashima; Koji, Okamura

    2007-03-01

    Telemedicine has opened the door to a wide range of learning experiences and simultaneous feedback to doctors and students at various remote locations. However, there are limitations, such as the lack of approved international standards of ethics. The aim of our study was to establish a telemedical education system through the development of high-quality images, using a digital transfer system on a high-speed network. Using telemedicine, surgical images can be sent not only within domestic areas but also abroad, and opinions regarding surgical procedures can be exchanged between the operating room and a remote place. The Asia Pacific Information Infrastructure (APII) link, a submarine cable between Busan and Fukuoka, was used to connect Korea with Japan, and the Korea Advanced Research Network (KOREN) was used to connect Busan with Seoul. Teleconferencing and video streaming between Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan were realized using the Digital Video Transfer System (DVTS) over an IPv4 network. Four endoscopic surgeries were successfully transmitted between Seoul and Kyushu, while concomitant teleconferences took place between the two throughout the operations. A sufficient bandwidth of 60 Mbps could be maintained for two-line transmissions. The transmitted video image had no frame loss at a rate of 30 images per second. The sound was also clear, and the time delay was less than 0.3 sec. Our experience has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over Internet protocol, which is easy to perform, reliable, and economical. Our network system may become a promising tool for worldwide telemedical communication in the future.

  14. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data have begun to grow explosively. However, in cloud-based video surveillance systems, replicas occupy a large amount of storage space, and the slow response of video playback constrains the performance of the system. In this paper, comprehensively considering the characteristics of video data, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviour of users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.

  15. Video interactivo en realidad virtual inmersiva

    OpenAIRE

    Gordo Ara, Juan

    2016-01-01

    Currently, developers are creating new virtual reality applications related to the field of video games or computer-generated graphics environments. This is due largely to the arrival on the consumer market of new technologies for experiencing these virtual reality environments, which has led to a wide adoption of 360° videos that can be viewed directly on smartphones. In addition, cheap adapters allow the phone to be converted into a virtual reality display. In this project we investigated me...

  16. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  17. Interactive video audio system: communication server for INDECT portal

    Science.gov (United States)

    Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem

    2014-05-01

    The paper deals with the presentation of the IVAS system within the EU FP7 INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information. It is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can access pictures or videos sent by the commander in the office and can respond to the command via text or multimedia messages taken by their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.

  18. Musculoskeletal disorders among video display terminal (VDT workers comparing with other office workers

    Directory of Open Access Journals (Sweden)

    H. Akbari

    2010-07-01

    Full Text Available Background and aims: Scientific and industrial development has led to increased production, which has been associated with different complications, including occupational stress and an increased incidence of work-related musculoskeletal disorders. Musculoskeletal disorders are frequent causes of absenteeism in developed countries. We designed this study to assess musculoskeletal disorders and occupational stress among video display terminal (VDT) workers in comparison with other office workers. Methods: This was a cross-sectional study of 72 VDT workers (cases) and 145 office workers (controls). In this study we used the Nordic and Osipow questionnaires to evaluate musculoskeletal disorders and job stress, respectively. The questionnaires were filled in by direct interview. The t test, chi square, Fisher test and logistic regression were used for data analysis. Results: The frequency of musculoskeletal disorders among VDT users in the last 12 months was 46.5%, 20.3%, 5.1%, 12.4% and 57.6% in the neck, shoulder, elbow, wrist and low back areas, respectively. The frequency of musculoskeletal complaints in the neck, shoulder and wrist and the mean occupational stress score were higher in the case group compared with the control group, and both results were statistically significant. Conclusion: VDT work is a high-risk job for musculoskeletal disorders. In this study the frequency of musculoskeletal disorders, especially in high-risk regions for this job, was higher in VDT workers than in other office workers. We recommend performing further studies to identify non-ergonomic points and postures in these workers.

  19. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  20. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the tanks in which they are deployed. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  1. Virtual Pinball / Video Arcade games

    NARCIS (Netherlands)

    1997-01-01

    For use in multimedia or other environments, a virtual pinball/video arcade game displays one or more computer-generated runner elements, runner inject elements, and runner interactivity elements. It has a programmed computer for simulating movement of the runner elements. This is interfered with by

  2. Three-dimensional visualization and display technologies; Proceedings of the Meeting, Los Angeles, CA, Jan. 18-20, 1989

    International Nuclear Information System (INIS)

    Robbins, W.E.; Fisher, S.S.

    1989-01-01

    Special attention was given to problems of stereoscopic display devices, such as CAD for enhancement of the design process in visual arts, stereo-TV improvement of remote manipulator performance, a voice-controlled stereographic video camera system, and head-mounted displays and their low-cost design alternatives. Also discussed were a novel approach to chromostereoscopic microscopy, computer-generated barrier-strip autostereography and lenticular stereograms, and parallax-barrier three-dimensional TV. Additional topics include processing and user interface issues and visualization applications, including automated analysis of fluid flow topology, optical tomographic measurements of mixing fluids, visualization of complex data, visualization environments, and visualization management systems

  3. Virtual Video Prototyping for Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  4. Cost-Effective Video Filtering Solution for Real-Time Vision Systems

    Directory of Open Access Journals (Sweden)

    Karl Martin

    2005-08-01

    Full Text Available This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD. Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands on this solution, and thus, implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
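
    The abstract gives no equations for the filter itself; purely as a rough illustration of the order-statistics core of such a spatiotemporal scheme (not the authors' adaptive, bit-serial FPLD design), a 3x3x3 median over three consecutive grayscale frames could be sketched in Python as follows. Window size and border handling are assumptions.

        import numpy as np

        def spatiotemporal_median(frames, t):
            """Median over a 3x3x3 spatio-temporal window centred on frame t.

            frames: sequence of grayscale frames (H, W); t must have a
            previous and a next frame. Borders are handled by edge padding.
            """
            # Temporal window of three frames: previous, current, next.
            stack = np.stack([frames[t - 1], frames[t], frames[t + 1]], axis=0)
            padded = np.pad(stack, ((0, 0), (1, 1), (1, 1)), mode="edge")
            h, w = frames[t].shape
            out = np.empty((h, w), dtype=frames[t].dtype)
            for y in range(h):
                for x in range(w):
                    window = padded[:, y:y + 3, x:x + 3]   # 3x3x3 neighbourhood
                    out[y, x] = np.median(window)          # order-statistics output
            return out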

  5. Head Worn Display System for Equivalent Visual Operations

    Science.gov (United States)

    Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob

    2009-01-01

    Head-worn displays, or so-called near-to-eye displays, have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and the display of spatially-integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results of a fixed-base piloted simulation, investigating the impact of near-to-eye displays on both operational and visual performance, are reported. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilots' flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally support a monocular design with minimal impact due to eye dominance. Finally, a method for head tracker system latency measurement is developed and used to compare two different devices.

  6. High Performance Paper White- and Full-Color Reflective Displays

    National Research Council Canada - National Science Library

    Fiske, Thomas

    2001-01-01

    This report documents work performed by a team led by dpiX LLC to develop fabrication technology for a paper-white, video-rate, full-color reflective display technology based on holographically formed...

  7. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360-degree viewing range without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created by using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  8. Inexpensive remote video surveillance system with microcomputer and solar cells

    International Nuclear Information System (INIS)

    Guevara Betancourt, Edder

    2013-01-01

    A low-cost prototype for remote video surveillance is developed with a Raspberry Pi (RPI) board. Additionally, the theoretical basis for providing energy independence through solar cells and a battery bank is developed. Some existing commercial monitoring systems are studied and analyzed, covering components such as cameras, communication devices (WiFi and 3G), free software packages for video surveillance, control mechanisms and the theory of remote photovoltaic systems. A number of steps are carried out to implement the module and to install, configure and test each of the hardware and software elements that make it up, exploring the feasibility of providing intelligence to the system using the chosen software. Events generated by motion detection can be viewed, archived and extracted in a simple, intuitive way. The implementation of the module with a microcomputer and video surveillance/motion detection software (ZoneMinder) has proven to be an option with great potential, as the platform for monitoring and recording data has provided all the tools needed for robust and secure surveillance. (author) [es

  9. Method for operating video game with back-feeding a video image of a player, and a video game arranged for practicing the method.

    NARCIS (Netherlands)

    2006-01-01

    In a video gaming environment, a player is enabled to interact with the environment. Further, a score and/or performance of the player in a particular session is machine detected and fed back into the gaming environment, and a representation of said score and/or performance is displayed in visual

  10. Data compression systems for home-use digital video recording

    NARCIS (Netherlands)

    With, de P.H.N.; Breeuwer, M.; van Grinsven, P.A.M.

    1992-01-01

    The authors focus on image data compression techniques for digital recording. Image coding for storage equipment covers a large variety of systems because the applications differ considerably in nature. Video coding systems suitable for digital TV and HDTV recording and digital electronic still

  11. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including camera focal lengths and the distance and angle parameters between two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit, built around the DM642, enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and, synchronously, the G and B components from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components in this way, the system limits the loss of the chrominance components and keeps the picture color saturation above 95% of the original. The optimized enhancement algorithm reduces the amount of data processed during fusion, shortening the fusion time and improving the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video and present a pleasant experience to audiences wearing red-blue glasses.
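
    As a minimal sketch of the red-blue component fusion step described above (assuming two registered RGB frames and omitting the brightness enhancement and the DM642/BIOS specifics), the channel extraction could look like this in Python:

        import numpy as np

        def fuse_red_blue(left_rgb, right_rgb):
            """Fuse two registered RGB frames into a red-blue (anaglyph) 3D frame.

            The red channel is taken from the left camera, the green and blue
            channels from the right camera, mirroring the component-extraction
            step in the abstract.
            """
            fused = np.empty_like(left_rgb)
            fused[..., 0] = left_rgb[..., 0]    # R from the left view
            fused[..., 1] = right_rgb[..., 1]   # G from the right view
            fused[..., 2] = right_rgb[..., 2]   # B from the right view
            return fused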

  12. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
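
    The latent loss and EM-like training procedure are specific to the paper; the underlying pair-wise ranking constraint, however, can be illustrated with a plain linear ranker trained on hinge-loss sub-gradients. The hyperparameters and the absence of a latent variable are illustrative simplifications, not the authors' settings.

        import numpy as np

        def train_pairwise_ranker(pairs, dim, epochs=10, lr=0.01, margin=1.0):
            """Linear ranking model trained with a pair-wise hinge loss.

            pairs: list of (x_pos, x_neg) feature vectors, where x_pos comes
            from a segment kept in the edited video and x_neg from a trimmed
            segment. Returns a weight vector w; w.dot(x) is the highlight score.
            """
            w = np.zeros(dim)
            for _ in range(epochs):
                for x_pos, x_neg in pairs:
                    # Require score(x_pos) > score(x_neg) + margin.
                    if w.dot(x_pos) - w.dot(x_neg) < margin:
                        w += lr * (x_pos - x_neg)   # sub-gradient step on the hinge loss
            return w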

  13. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.
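
    The abstract does not name the detection library used for the fiducial markers; assuming the markers have already been detected in the image and matched to their known positions on the CPRHS body, the pose-recovery step can be sketched with OpenCV's generic solvePnP routine (a stand-in for illustration, not the project's actual implementation).

        import numpy as np
        import cv2

        def estimate_cprhs_pose(marker_points_3d, marker_points_2d,
                                camera_matrix, dist_coeffs):
            """Estimate the CPRHS pose from detected fiducial markers.

            marker_points_3d: (N, 3) marker coordinates in the CPRHS body frame.
            marker_points_2d: (N, 2) corresponding detections in the camera image.
            camera_matrix, dist_coeffs: intrinsic calibration of the fixed camera.
            Returns (R, t), the rotation and translation of the CPRHS in the
            camera frame; combining this with the known camera pose in the
            building gives the vehicle's position and orientation in the scenario.
            """
            ok, rvec, tvec = cv2.solvePnP(
                np.asarray(marker_points_3d, dtype=np.float64),
                np.asarray(marker_points_2d, dtype=np.float64),
                camera_matrix, dist_coeffs)
            if not ok:
                raise RuntimeError("Pose estimation failed")
            R, _ = cv2.Rodrigues(rvec)      # rotation vector -> rotation matrix
            return R, tvec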

  14. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building

  15. A programmable display layer for virtual reality system architectures.

    Science.gov (United States)

    Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.

  16. Research of real-time video processing system based on 6678 multi-core DSP

    Science.gov (United States)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly toward intelligent applications, and complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion, image stabilization and image enhancement into a single system with good real-time performance and superior processing power, overcoming the defects of traditional video processing products that offer only simple, single functions. The system addresses video applications such as security monitoring, giving full play to the effectiveness of video monitoring and improving economic benefits for enterprises.

  17. INFORMATION DISPLAY: CONSIDERATIONS FOR DESIGNING COMPUTER-BASED DISPLAY SYSTEMS

    International Nuclear Information System (INIS)

    O'HARA, J.M.; PIRUS, D.; BELTRATCCHI, L.

    2004-01-01

    This paper discussed the presentation of information in computer-based control rooms. Issues associated with the typical displays currently in use are discussed. It is concluded that these displays should be augmented with new displays designed to better meet the information needs of plant personnel and to minimize the need for interface management tasks (the activities personnel have to do to access and organize the information they need). Several approaches to information design are discussed, specifically addressing: (1) monitoring, detection, and situation assessment; (2) routine task performance; and (3) teamwork, crew coordination, collaborative work

  18. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses USB cameras to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The JPEG image compression standard is used to compress the video data, which is then transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the necessity of the system, then briefly introduces the hardware and software realization, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system for the combine harvester achieved 30 fps at a resolution of 800x600, and the response delay over the public network was about 40 ms.
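
    The deployed system is an embedded Linux program on ARM11; purely as an illustration of the capture/compress/transmit loop it describes, a Python sketch using OpenCV and a TCP socket might look as follows. The device index, resolution and length-prefixed framing protocol are assumptions, not details from the paper.

        import socket
        import struct
        import cv2

        def stream_camera(host, port, device=0, quality=80):
            """Capture from a USB camera, JPEG-compress each frame and send it
            over TCP, mimicking the capture/compress/transmit loop described.

            Each message is a 4-byte big-endian length header followed by the
            JPEG payload, so the monitoring centre can reframe the stream.
            """
            cap = cv2.VideoCapture(device)
            cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
            cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
            sock = socket.create_connection((host, port))
            try:
                while True:
                    ok, frame = cap.read()
                    if not ok:
                        break
                    ok, jpeg = cv2.imencode(
                        ".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
                    if not ok:
                        continue
                    payload = jpeg.tobytes()
                    sock.sendall(struct.pack(">I", len(payload)) + payload)
            finally:
                cap.release()
                sock.close()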

  19. Simulated monitor display for CCTV

    International Nuclear Information System (INIS)

    Steele, B.J.

    1982-01-01

    Two computer programs have been developed which generate a two-dimensional graphic perspective of the video output produced by a Closed Circuit Television (CCTV) camera. Both programs were primarily written to produce a graphic display simulating the field-of-view (FOV) of a perimeter assessment system as seen on a CCTV monitor. The original program was developed for use on a Tektronix 4054 desktop computer; however, the usefulness of this graphic display program led to the development of a similar program for a Hewlett-Packard 9845B desktop computer. After entry of various input parameters, such as, camera lens and orientation, the programs automatically calculate and graphically plot the locations of various items, e.g., fences, an assessment zone, running men, and intrusion detection sensors. Numerous special effects can be generated to simulate such things as roads, interior walls, or sides of buildings. Other objects can be digitized and entered into permanent memory similar to the running men. With this type of simulated monitor perspective, proposed camera locations with respect to fences and a particular assessment zone can be rapidly evaluated without the costly time delays and expenditures associated with field evaluation

  20. Virtual navigation performance: the relationship to field of view and prior video gaming experience.

    Science.gov (United States)

    Richardson, Anthony E; Collaer, Marcia L

    2011-04-01

    Two experiments examined whether learning a virtual environment was influenced by field of view and how it related to prior video gaming experience. In the first experiment, participants (42 men, 39 women; M age = 19.5 yr., SD = 1.8) performed worse on a spatial orientation task displayed with a narrow field of view in comparison to medium and wide field-of-view displays. Counter to initial hypotheses, wide field-of-view displays did not improve performance over medium displays, and this was replicated in a second experiment (30 men, 30 women; M age = 20.4 yr., SD = 1.9) presenting a more complex learning environment. Self-reported video gaming experience correlated with several spatial tasks: virtual environment pointing and tests of Judgment of Line Angle and Position, mental rotation, and Useful Field of View (with correlations between .31 and .45). When prior video gaming experience was included as a covariate, sex differences in spatial tasks disappeared.

  1. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  2. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
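
    The paper's block-by-block complexity control and its complexity-distortion model are not reproduced in the abstract; as a simplified, channel-level sketch of how a fixed computational budget might be allocated by greedy marginal analysis (an assumed allocation strategy, not the authors' algorithm), consider:

        def allocate_complexity(channels, budget):
            """Greedy marginal allocation of a computational budget across channels.

            channels: one list of operating points per channel, each point a
            (complexity, distortion) pair sorted by increasing complexity.
            Returns the chosen operating-point index per channel such that the
            summed complexity stays within budget, preferring upgrades with the
            best distortion reduction per unit of extra complexity.
            """
            choice = [0] * len(channels)                  # start at the cheapest points
            used = sum(ch[0][0] for ch in channels)
            while True:
                best, best_gain = None, 0.0
                for i, ch in enumerate(channels):
                    j = choice[i]
                    if j + 1 < len(ch):
                        extra = ch[j + 1][0] - ch[j][0]
                        gain = (ch[j][1] - ch[j + 1][1]) / max(extra, 1e-9)
                        if used + extra <= budget and gain > best_gain:
                            best, best_gain = i, gain
                if best is None:
                    return choice
                used += (channels[best][choice[best] + 1][0]
                         - channels[best][choice[best]][0])
                choice[best] += 1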

  3. Head-mounted display for use in functional endoscopic sinus surgery

    Science.gov (United States)

    Wong, Brian J.; Lee, Jon P.; Dugan, F. Markoe; MacArthur, Carol J.

    1995-05-01

    Since the introduction of functional endoscopic sinus surgery (FESS), the procedure has undergone rapid change, with its evolution keeping pace with technological advances. The advent of low-cost charge coupled device (CCD) cameras revolutionized the practice and instruction of FESS. Video-based FESS has allowed for documentation of the surgical procedure as well as interactive instruction during surgery. Presently, the technical requirements of video-based FESS include the addition of one or more television monitors positioned strategically in the operating room. Though video monitors have greatly enhanced surgical endoscopy by re-involving nurses and assistants in the actual mechanics of surgery, they require the operating surgeon to focus on the screen instead of the patient. In this study, we describe the use of a new low-cost liquid crystal display (LCD) based device that functions as a monitor but is mounted on the head on a visor (PT-O1, O1 Products, Westlake Village, CA). This study illustrates the application of these HMD devices to FESS operations. The same surgeon performed the operation in each patient. In one nasal fossa, surgery was performed using conventional video FESS methods. The contralateral side was operated on while wearing the head-mounted video display. The device had adequate resolution for the purposes of FESS. No adverse effects were noted intraoperatively. The results on the patients' ipsilateral and contralateral sides were similar. The visor eliminated significant torsion of the surgeon's neck during the operation while permitting simultaneous viewing of both the patient and the intranasal surgical field.

  4. [Telemedicine with digital video transport system].

    Science.gov (United States)

    Hahm, Joon Soo; Shimizu, Shuji; Nakashima, Naoki; Byun, Tae Jun; Lee, Hang Lak; Choi, Ho Soon; Ko, Yong; Lee, Kyeong Geun; Kim, Sun Il; Kim, Tae Eun; Yun, Jiwon; Park, Yong Jin

    2004-06-01

    The growth of technology based on the internet protocol has affected informatics and automatic controls in medical fields. The aim of this study was to establish a telemedical educational system by developing high-quality image transfer using the DVTS (digital video transport system) over a high-speed internet network. Using telemedicine, we were able to send surgical images not only to domestic areas but also internationally. Moreover, we could discuss the course of surgical procedures between the operating room and the seminar room. The Korea-Japan cable network (KJCN) was laid as a submarine link between Busan and Fukuoka. On the other hand, the Korea advanced research network (KOREN) was used to connect Busan and Seoul. To link the images between the Hanyang University Hospital in Seoul and Kyushu University Hospital in Japan, we set up a teleconference system and a recorded image-streaming system with DVTS over an IPv4 network. Two operative cases were transmitted successfully. We could maintain sufficient bandwidth of 60 Mbps for two-line transmission. The transmitted moving images showed no frame loss at a rate of 30 frames per second. The sound was also clear, and the time delay was less than 0.3 sec. Our study has demonstrated the feasibility of domestic and international telemedicine. We have established an international medical network with high-quality video transmission over the internet protocol. It is easy to perform, reliable, and also economical. Thus, it will be a promising tool in remote medicine for worldwide telemedical communication in the future.

  5. A review of video security training and assessment-systems and their applications

    International Nuclear Information System (INIS)

    Cellucci, J.; Hall, R.J.

    1991-01-01

    This paper reports that during the last 10 years computer-aided video data collection and playback systems have been used as nuclear facility security training and assessment tools with varying degrees of success. These mobile systems have been used by trained security personnel for response force training, vulnerability assessment, force-on-force exercises and crisis management. Typically, synchronous recordings from multiple video cameras, communications audio, and digital sensor inputs are played back to the exercise participants and then edited for training and briefing. Factors that have influenced user acceptance include: frequency of use, the demands placed on security personnel, fear of punishment, user training requirements and equipment cost. The introduction of S-VHS video and new software for scenario planning, video editing and data reduction should bring about a wider range of security applications and provide the opportunity for significant cost sharing with other user groups.

  6. Secure Video Surveillance System (SVSS) for unannounced safeguards inspections

    International Nuclear Information System (INIS)

    Galdoz, Erwin G.; Pinkalla, Mark

    2010-01-01

    The Secure Video Surveillance System (SVSS) is a collaborative effort between the U.S. Department of Energy (DOE), Sandia National Laboratories (SNL), and the Brazilian-Argentine Agency for Accounting and Control of Nuclear Materials (ABACC). The joint project addresses specific requirements of redundant surveillance systems installed in two South American nuclear facilities as a tool to support unannounced inspections conducted by ABACC and the International Atomic Energy Agency (IAEA). The surveillance covers the critical time (as much as a few hours) between the notification of an inspection and the access of inspectors to the location in the facility where the surveillance equipment is installed. ABACC and the IAEA currently use the EURATOM Multiple Optical Surveillance System (EMOSS). This outdated system is no longer available or supported by the manufacturer. The current EMOSS system has met the project objective; however, the lack of available replacement parts and system support has made this system unsustainable and has increased the risk of an inoperable system. A new system that utilizes current technology and is maintainable is required to replace the aging EMOSS system. ABACC intends to replace one of the existing ABACC EMOSS systems with the Secure Video Surveillance System. SVSS utilizes commercial off-the-shelf (COTS) technologies for all individual components. Sandia National Laboratories supported the system design for SVSS to meet Safeguards requirements, i.e. tamper indication, data authentication, etc. The SVSS consists of two video surveillance cameras linked securely to a data collection unit. The collection unit is capable of retaining historical surveillance data for at least three hours with picture intervals as short as 1 sec. Images in .jpg format are available to inspectors using various software review tools. SNL has delivered two SVSS systems for test and evaluation at the ABACC Safeguards Laboratory. An additional 'proto-type' system remains

  7. Development of design window evaluation and display system. 1. System development and performance confirmation

    International Nuclear Information System (INIS)

    Muramatsu, Toshiharu; Yamaguchi, Akira

    2003-07-01

    Purpose: The work was performed to develop a design window evaluation and display system that makes it easy to obtain the effects of various design parameters on typical thermal-hydraulic issues arising from the use of different kinds of working fluid. Method: The functions of the system are 'confirmation of design margin' for the present design, 'confirmation of the affected design zone' when a designer changes a design parameter, and 'search for a design improvement' for design optimization. The system was developed using existing software on a PC and a database of analytical results for typical thermal-hydraulic issues provided by separate work. Results: (1) System design: In order to develop the design window evaluation and display system, a 'numerical analysis unit', 'statistical analysis unit', 'MMI unit' and 'optimization unit' were designed based on the selected optimization procedure and display visualization. Further, the total system design was performed by combining these units. The typical thermal-hydraulic issues to be considered are upper plenum thermal hydraulics, thermal stratification, free surface sloshing, flow-induced vibration of a heat exchanger and thermal striping in T-junction piping systems. (2) Development of a prototype system and a functional check: A prototype of the design window evaluation and display system was developed and its functions were confirmed as planned. (author)

  8. Flatbed-type 3D display systems using integral imaging method

    Science.gov (United States)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on the flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie contents and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  9. Design of batch audio/video conversion platform based on JavaEE

    Science.gov (United States)

    Cui, Yansong; Jiang, Lianpin

    2018-03-01

    With the rapid development of the digital publishing industry, audio/video publishing is characterized by a diversity of coding standards for audio and video files, massive data volumes and other significant features. Faced with massive and diverse data, quickly and efficiently converting it to a unified coding format has brought great difficulties to digital publishing organizations. In view of this demand and the present situation, this paper proposes a distributed online audio and video format conversion platform with a B/S structure, based on the Spring+SpringMVC+Mybatis development architecture and combined with the open-source FFMPEG format conversion tool. Based on the Java language, the key technologies and strategies of the platform architecture are analyzed, and an efficient audio and video format conversion system is designed and developed, composed of a front display system, a core scheduling server and conversion servers. The test results show that, compared with an ordinary audio and video conversion scheme, the batch audio and video format conversion platform can effectively improve the conversion efficiency of audio and video files and reduce the complexity of the work. Practice has proved that the key technologies discussed in this paper can be applied in the field of large-batch file processing and have practical application value.
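
    The platform itself is built on JavaEE; the core conversion step, however, is an invocation of the FFMPEG command-line tool, which can be illustrated with a minimal Python wrapper of the kind a conversion server might run per job. The output container, codecs and file layout are assumptions, not the paper's configuration.

        import subprocess
        from pathlib import Path

        def convert_to_unified_format(src, dst_dir, vcodec="libx264", acodec="aac"):
            """Convert one source file to a unified MP4 format by invoking ffmpeg.

            In the platform described, a pool of conversion servers would run
            jobs like this one in parallel under control of the scheduling server.
            """
            dst = Path(dst_dir) / (Path(src).stem + ".mp4")
            cmd = ["ffmpeg", "-y", "-i", str(src),
                   "-c:v", vcodec, "-c:a", acodec, str(dst)]
            subprocess.run(cmd, check=True)   # raises CalledProcessError on failure
            return dst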

  10. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
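
    The abstract does not detail the refinement criterion; a common view-dependent LOD selection rule, shown here as a hypothetical Python sketch with assumed node fields, refines a node when its geometric error projected onto the screen exceeds a pixel tolerance.

        import math

        def select_lod(node, camera_pos, focal_px, max_error_px=1.0):
            """Recursively choose which LOD nodes to render (view-dependent refinement).

            node: element of a precomputed LOD hierarchy with assumed fields
            'center', 'geometric_error', 'children' and 'mesh'. A node is
            refined when its geometric error, projected onto the screen,
            exceeds the tolerated pixel error; otherwise its own mesh is drawn.
            """
            dist = max(math.dist(node["center"], camera_pos), 1e-6)
            projected_error = node["geometric_error"] * focal_px / dist
            if projected_error <= max_error_px or not node["children"]:
                return [node["mesh"]]
            meshes = []
            for child in node["children"]:
                meshes += select_lod(child, camera_pos, focal_px, max_error_px)
            return meshes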

  11. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    Full Text Available The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters from both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks while considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
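
    The ANFIS and regression models themselves are not specified in the abstract; purely as a hedged illustration of the non-linear regression approach, one could fit an assumed functional form of MOS against a few QoS parameters. The parameter set (send bitrate, frame rate, packet loss rate) and the functional form below are illustrative choices, not the paper's model.

        import numpy as np
        from scipy.optimize import curve_fit

        def mos_model(X, a, b, c, d):
            """Illustrative non-linear MOS model: quality rises with send bitrate
            and frame rate and falls with packet loss. X rows: (SBR, FR, PLR)."""
            sbr, fr, plr = X
            return a + b * np.log(sbr) + c * fr - d * plr

        def fit_mos_model(X, y):
            """Fit the assumed MOS model to training data.

            X: array of shape (3, N) with QoS parameters per test sequence.
            y: corresponding MOS values from the training dataset.
            """
            params, _ = curve_fit(mos_model, X, y, p0=[1.0, 1.0, 0.01, 0.1])
            return params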

  12. Lighting Control System for Premises with Display Screen Equipment

    Science.gov (United States)

    Kudryashov, A. V.

    2017-11-01

    The use of Display Screen Equipment (DSE) at enterprises allows productivity and production safety to be increased, the number of personnel to be minimized and the work of specialists to be simplified, but on the other hand it changes the usual working conditions. When personnel work with displays, visual fatigue develops more quickly, which contributes to nervous tension, stress and possible erroneous actions. The low interest of lighting control system developers in rooms with displays is dictated by the special illumination requirements of sanitary and hygienic standards (limiting excess workplace illumination). We decided to create a combined lighting system which operates taking into account both daylight and artificial light sources. The brightness adjustment of the LED lamps is carried out according to the DALI protocol, and the natural illumination is adjusted by means of smart glasses. The technical requirements for the lighting control system, its structural-functional scheme and the algorithm for controlling the operation of the system have been developed. The elements of the control units, sensors and actuators have been selected.

  13. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    Energy Technology Data Exchange (ETDEWEB)

    Ebe, Kazuyu, E-mail: nrr24490@nifty.com; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-08-15

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
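
    The in-house analysis software is not described in detail; a minimal sketch of the per-frame centre-difference computation it performs on the 2D dose images (with the field-defining threshold and detector pixel pitch as assumed parameters) could be:

        import numpy as np

        def centroid(image):
            """Intensity-weighted centroid (row, col) of a 2D image or mask."""
            ys, xs = np.nonzero(image)
            w = image[ys, xs].astype(float)
            return np.average(ys, weights=w), np.average(xs, weights=w)

        def y_tracking_error(dose_image, target_mask, field_threshold, pixel_mm):
            """Absolute Y-difference (mm) between exposed-field and target centres.

            dose_image: one 2D frame from the ion-chamber array.
            target_mask: binary mask of the exposed target in the same frame.
            field_threshold: dose level (e.g. 50% of maximum) defining the field.
            pixel_mm: detector pixel pitch in millimetres.
            """
            field_mask = dose_image >= field_threshold
            y_field, _ = centroid(dose_image * field_mask)
            y_target, _ = centroid(target_mask)
            return abs(y_field - y_target) * pixel_mm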

  14. Development of a video image-based QA system for the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system

    International Nuclear Information System (INIS)

    Ebe, Kazuyu; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji; Sugimoto, Satoru; Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi; Court, Laurence

    2015-01-01

    Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors

  15. LOFT advanced control room operator diagnostic and display system (ODDS)

    International Nuclear Information System (INIS)

    Larsen, D.G.; Robb, T.C.

    1980-01-01

    The Loss-of-Fluid Test (LOFT) Reactor Facility in Idaho includes a highly instrumented nuclear reactor operated by the Department of Energy for the purpose of establishing nuclear safety requirements. The results of the development and installation into LOFT of an Operator Diagnostic and Display System (ODDS) are presented. The ODDS is a computer-based graphics display system centered around a PRIME 550 computer with several RAMTEK color graphic display units located within the control room and available to the reactor operators. Use of computer-based color graphics to aid the reactor operator is discussed. A detailed hardware description of the LOFT data system and the ODDS is presented. Methods and problems of backfitting the ODDS equipment into the LOFT plant are discussed

  16. Applying emerging digital video interface standards to airborne avionics sensor and digital map integrations: benefits outweigh the initial costs

    Science.gov (United States)

    Kuehl, C. Stephen

    1996-06-01

    Video signal system performance can be compromised in a military aircraft cockpit management system (CMS) with the tailoring of vintage Electronics Industries Association (EIA) RS170 and RS343A video interface standards. Video analog interfaces degrade when induced system noise is present. Further signal degradation has been traditionally associated with signal data conversions between avionics sensor outputs and the cockpit display system. If the CMS engineering process is not carefully applied during the avionics video and computing architecture development, extensive and costly redesign will occur when visual sensor technology upgrades are incorporated. Close monitoring and technical involvement in video standards groups provides the knowledge-base necessary for avionic systems engineering organizations to architect adaptable and extendible cockpit management systems. With the Federal Communications Commission (FCC) in the process of adopting the Digital HDTV Grand Alliance System standard proposed by the Advanced Television Systems Committee (ATSC), the entertainment and telecommunications industries are adopting and supporting the emergence of new serial/parallel digital video interfaces and data compression standards that will drastically alter present NTSC-M video processing architectures. The re-engineering of the U.S. Broadcasting system must initially preserve the electronic equipment wiring networks within broadcast facilities to make the transition to HDTV affordable. International committee activities in technical forums like ITU-R (former CCIR), ANSI/SMPTE, IEEE, and ISO/IEC are establishing global consensus on video signal parameterizations that support a smooth transition from existing analog based broadcasting facilities to fully digital computerized systems. An opportunity exists for implementing these new video interface standards over existing video coax/triax cabling in military aircraft cockpit management systems. Reductions in signal

  17. Discontinuity minimization for omnidirectional video projections

    Science.gov (United States)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a resolution increase beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications is proposed. Using a discontinuity entropy minimization function, the projection origin rotation can be determined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  18. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between the two eyes and a dual-camera system. Thus we can establish the relationship between the prism single-camera system and the two eyes and obtain the positional relationship of prism, camera, and object that gives the best stereo display effect. Finally, using NVIDIA active shutter stereo glasses, the three-dimensional (3-D) display of the object is realized. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various viewing configurations of the two eyes. The stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.

  19. High color fidelity thin film multilayer systems for head-up display use

    Science.gov (United States)

    Tsou, Yi-Jen D.; Ho, Fang C.

    1996-09-01

    Head-up displays are gaining increasing acceptance in automotive vehicles for indication and position/navigation purposes. An optical combiner, which allows the driver to receive image information from outside and inside of the automobile, is the essential part of this display device. Two multilayer thin film combiner coating systems with distinctive polarization selectivity and broadband spectral neutrality are discussed. One of the coating systems was designed to be located at the lower portion of the windshield. The coating reduced the exterior glare by approximately 45% and provided about 70% average see-through transmittance in addition to the interior information display. The other coating system was designed to be integrated with the sunshield located at the upper portion of the windshield. The coating reflected the interior information display while reducing direct sunlight penetration to 25%. Color fidelity for both interior and exterior images was maintained in both systems. This facilitated the display of full-color maps. Both coating systems were absorptionless and environmentally durable. Designs, fabrication, and performance of these coating systems are addressed.

  20. A rule-based expert system for generating control displays at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Coulter, K.J.

    1993-01-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool

  1. A rule-based expert system for generating control displays at the advanced photon source

    International Nuclear Information System (INIS)

    Coulter, K.J.

    1994-01-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool. ((orig.))

  2. Enabling MPEG-2 video playback in embedded systems through improved data cache efficiency

    Science.gov (United States)

    Soderquist, Peter; Leeser, Miriam E.

    1999-01-01

    Digital video decoding, enabled by the MPEG-2 Video standard, is an important future application for embedded systems, particularly PDAs and other information appliances. Many such systems require portability and wireless communication capabilities, and thus face severe limitations in size and power consumption. This places a premium on integration and efficiency, and favors software solutions for video functionality over specialized hardware. The processors in most embedded systems currently lack the computational power needed to perform video decoding, but a related and equally important problem is the required data bandwidth, and the need to cost-effectively ensure adequate data supply. MPEG data sets are very large, and generate significant amounts of excess memory traffic for standard data caches, up to 100 times the amount required for decoding. Meanwhile, cost and power limitations restrict cache sizes in embedded systems. Some systems, including many media processors, eliminate caches in favor of memories under direct, painstaking software control in the manner of digital signal processors. Yet MPEG data has locality which caches can exploit if properly optimized, providing fast, flexible, and automatic data supply. We propose a set of enhancements which target the specific needs of the heterogeneous data types within the MPEG decoder working set. These optimizations significantly improve the efficiency of small caches, reducing cache-memory traffic by almost 70 percent, and can make an enhanced 4 KB cache perform better than a standard 1 MB cache. This performance improvement can enable high-resolution, full frame rate video playback in cheaper, smaller systems than would otherwise be possible.

  3. Design characteristics of safety parameter display system for nuclear power plants

    International Nuclear Information System (INIS)

    Zhang Yuangfang

    1992-02-01

    The design features of the safety parameter display system (SPDS) developed by Tsinghua University are introduced. Some new features have been added to the system functions: (1) a hierarchical display structure; (2) human factors in the display format design; (3) automatic diagnosis of the safety status of the nuclear power plant; (4) extension of the SPDS scope of use; (5) a flexible hardware structure. The new approaches in the design are: (1) adopting international design standards; (2) selecting safety parameters strictly; (3) developing software under a multitask operating system; (4) using a nuclear power plant simulator to verify the SPDS design.

  4. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular and its technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types of presentation method: one is a 3-D display system using special glasses and the other is a monitor system requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use. It is possible for this display unit to provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but this system has the drawback that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen, and it is impossible to enlarge the size of the screen, for example to twice its size. To enlarge the display area, the authors have developed a method of enlarging the display area using a mirror. Our extension method enables the observers to see the virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror can generate the virtual image plane and it enlarges the screen area twofold. Meanwhile the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  5. BIT RATE SEGMENTATION MECHANISM IN DYNAMIC ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Muhammad Audy Bazly

    2015-12-01

    Full Text Available This paper aims to analyze an Internet-based streaming video service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) on the Internet, adapting to the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage is to compress the video source to lower the bit rate using the H.26 video codec. The compressed video is further segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming media format described by the Media Presentation Description (MPD), known as MPEG-DASH. The MPEG-DASH streaming video format runs on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to the concept of scalability of streaming video services on the client side. The main target of the mechanism is a smooth MPEG-DASH streaming video display on the client. The simulation results show that the scalable video streaming scheme based on MPEG-DASH is able to improve the quality of the displayed image on the client side, where video buffering can be made constant and smooth for the duration of video viewing.
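    As a rough illustration of the client-side scalability described above, the sketch below picks, per segment, the highest-bitrate DASH representation that fits the measured throughput. The representation list, URL template and safety margin are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of client-side DASH rate adaptation: choose, per segment, the
# highest-bitrate representation that fits the measured throughput.
import time
import urllib.request

REPRESENTATIONS = [  # (bitrate in bit/s, segment URL template), as an MPD might advertise
    (400_000,   "video_400k_seg{:04d}.m4s"),
    (1_200_000, "video_1200k_seg{:04d}.m4s"),
    (3_000_000, "video_3000k_seg{:04d}.m4s"),
]

def choose_representation(throughput_bps: float, safety: float = 0.8):
    """Highest bitrate not exceeding a safety fraction of the measured throughput."""
    usable = [r for r in REPRESENTATIONS if r[0] <= throughput_bps * safety]
    return usable[-1] if usable else REPRESENTATIONS[0]

def fetch_segment(base_url: str, index: int, throughput_bps: float):
    """Download one segment and return (data, chosen bitrate, new throughput estimate)."""
    bitrate, template = choose_representation(throughput_bps)
    url = base_url + template.format(index)
    start = time.monotonic()
    data = urllib.request.urlopen(url).read()
    elapsed = max(time.monotonic() - start, 1e-6)
    measured = 8 * len(data) / elapsed  # bits per second, feeds the next decision
    return data, bitrate, measured
```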

  6. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, which aimed to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing the power consumption. This system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast, during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater images for one year has been started by our diving operation.

  7. Cyber Security Test Strategy for Non-safety Display System

    International Nuclear Information System (INIS)

    Son, Han Seong; Kim, Hee Eun

    2016-01-01

    Cyber security has been a big issue since the instrumentation and control (I and C) systems of nuclear power plants (NPPs) were digitalized. A cyber-attack on an NPP should be dealt with seriously because it might cause not only economic loss but also a radioactive material release. Research on the consequences of cyber-attacks on NPPs from a safety point of view has been conducted. A previous study shows the risk effect brought by the initiation of an event and the deterioration of mitigation functions by cyber terror. Although this study made conservative assumptions and simplifications, it gives an insight into the effect of a cyber-attack. Another study shows that an error in a non-safety display system could cause wrong actions by operators. According to this previous study, the failure of operator actions caused by a cyber-attack on a display system might threaten the safety of the NPP by limiting appropriate mitigation actions. This study suggests a test strategy focusing on a cyber-attack on the information and display system, which might cause operator failure. The test strategy can be used to evaluate and complement security measures. To identify whether a cyber-attack on the information and display system can affect the mitigation actions of the operator, a strategy to obtain test scenarios is suggested. The failure of the mitigation scenario is identified first. Then, for the test target in the scenario, software failure modes are applied to identify realistic failure scenarios. Testing should be performed for those scenarios to confirm the integrity of data and to assure the effectiveness of security measures.

  8. Cyber Security Test Strategy for Non-safety Display System

    Energy Technology Data Exchange (ETDEWEB)

    Son, Han Seong [Joongbu University, Geumsan (Korea, Republic of); Kim, Hee Eun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    Cyber security has been a big issue since the instrumentation and control (I and C) systems of nuclear power plants (NPPs) were digitalized. A cyber-attack on an NPP should be dealt with seriously because it might cause not only economic loss but also a radioactive material release. Research on the consequences of cyber-attacks on NPPs from a safety point of view has been conducted. A previous study shows the risk effect brought by the initiation of an event and the deterioration of mitigation functions by cyber terror. Although this study made conservative assumptions and simplifications, it gives an insight into the effect of a cyber-attack. Another study shows that an error in a non-safety display system could cause wrong actions by operators. According to this previous study, the failure of operator actions caused by a cyber-attack on a display system might threaten the safety of the NPP by limiting appropriate mitigation actions. This study suggests a test strategy focusing on a cyber-attack on the information and display system, which might cause operator failure. The test strategy can be used to evaluate and complement security measures. To identify whether a cyber-attack on the information and display system can affect the mitigation actions of the operator, a strategy to obtain test scenarios is suggested. The failure of the mitigation scenario is identified first. Then, for the test target in the scenario, software failure modes are applied to identify realistic failure scenarios. Testing should be performed for those scenarios to confirm the integrity of data and to assure the effectiveness of security measures.

  9. Developing Agent-Oriented Video Surveillance System through Agent-Oriented Methodology (AOM

    Directory of Open Access Journals (Sweden)

    Cheah Wai Shiang

    2016-12-01

    Full Text Available Agent-oriented methodology (AOM) is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, it has not yet been determined to what extent this is true. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.

  10. A Client-Server System for Ubiquitous Video Service

    Directory of Open Access Journals (Sweden)

    Ronit Nossenson

    2012-12-01

    Full Text Available In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are: efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous service requirements such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.

  11. Energy-dependent imaging in digital radiography: a review on acquisition, processing and display technique

    International Nuclear Information System (INIS)

    Coppini, G.; Maltinti, G.; Valli, G.; Baroni, M.; Buchignan, M.; Valli, G.

    1986-01-01

    The capabilities of energy-dependent imaging in digital radiography are analyzed, paying particular attention to digital video systems. The main techniques developed in recent years for selective energy imaging are reviewed following a unified approach. The advantages and limits of energy methods are discussed through a comparative analysis of computer-simulated data and experimental results obtained with standard x-ray equipment coupled to a digital video unit. Geometric phantoms are used as test objects, and images of a chest phantom are also produced. Since signal-to-noise ratio degradation is one of the major problems when dealing with selective imaging, a particular effort is made to investigate noise effects. In this perspective, an original colour-encoded display of energy sequences is presented. By mapping the various energy measurements onto different colour bands (typically those of an RGB TV monitor), increased image conspicuity is obtained without significant noise degradation: this is ensured by the energy dependence of attenuation coefficients and by the integrating characteristics of the display device.
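    A minimal sketch of the colour-encoded display idea, assuming three energy-selective acquisitions are available as 2-D arrays and are simply normalised into the R, G and B bands of the monitor; the per-band normalisation is an illustrative choice rather than the authors' exact mapping.

```python
# Sketch: map three energy-selective images onto the R, G and B channels.
import numpy as np

def encode_energy_rgb(low, mid, high):
    """Stack three energy images (2-D arrays) into one RGB image,
    normalising each band to its own full dynamic range."""
    def normalise(img):
        img = img.astype(np.float64)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    rgb = np.dstack([normalise(low), normalise(mid), normalise(high)])
    return (255 * rgb).astype(np.uint8)

# Example: rgb = encode_energy_rgb(img_40kev, img_70kev, img_100kev)
```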

  12. A distributed system of wireless signs using Gyricon electronic paper displays

    Science.gov (United States)

    Sprague, Robert A.

    2006-04-01

    The proliferation of digital information is leading to a wide range of applications which make it desirable to display data easily in many locations, all changeable and updateable. The difficulty in achieving such ubiquitous displays is the cost of signage, the cost of installation, and the software and systems to control the information being sent to each of these signs. In this paper we will talk about a networked system of such signs which are made from gyricon electronic paper. Gyricon electronic paper is a reflective, bistable display which can be made in large web sheets at a reasonable price. Since it does not require a backlight nor does it require power to refresh the display image, such technology is ideal for making signs which can be run on batteries with extremely long battery life, often not needing replacement for years. The display also has a very broad illumination scattering profile which makes it readily viewable from any angle. The basic operating mechanism of the display, its manufacturing technique, and achieved performance will be described, along with the description of a networked solution using many such signs controlled with system software to identify speakers and meetings in conference rooms, hospitality suites, or classrooms in universities. Systems will also be shown which are adapted to retail pricing signage and others which can be used for large format outdoor billboards.

  13. HTML 5 Displays for On-Board Flight Systems

    Science.gov (United States)

    Silva, Chandika

    2016-01-01

    During my internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as a hide-column and a filter-by-system option. This was for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJS library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working at JSC for this internship has taught me a lot about coding in JavaScript and HTML 5. I was also introduced to the concept of developing software as a team, and exposed to the different

  14. Real-time graphic display system for ROSA-V Large Scale Test Facility

    International Nuclear Information System (INIS)

    Kondo, Masaya; Anoda, Yoshinari; Osaki, Hideki; Kukita, Yutaka; Takigawa, Yoshio.

    1993-11-01

    A real-time graphic display system was developed for the ROSA-V Large Scale Test Facility (LSTF) experiments simulating accident management measures for prevention of severe core damage in pressurized water reactors (PWRs). The system works on an IBM workstation (Power Station RS/6000 model 560) and accommodates 512 channels out of about 2500 total measurements in the LSTF. It has three major functions: (a) displaying the coolant inventory distribution in the facility primary and secondary systems; (b) displaying the measured quantities at desired locations in the facility; and (c) displaying the time histories of measured quantities. The coolant inventory distribution is derived from differential pressure measurements along vertical sections and gamma-ray densitometer measurements for horizontal legs. The color display indicates liquid subcooling calculated from pressure and temperature at individual locations. (author)
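    The derivation of coolant inventory from differential pressure along a vertical section can be illustrated with a short hedged sketch. The hydrostatic relation below and the example densities are generic PWR-like assumptions, not LSTF calibration data.

```python
# Hedged sketch: collapsed liquid level in a vertical section from a
# differential-pressure measurement between bottom and top pressure taps.
G = 9.81  # m/s^2

def collapsed_level(dp_pa: float, height_m: float,
                    rho_liquid: float, rho_vapour: float) -> float:
    """Return the collapsed liquid level (m) in a section of given height.
    Uses dp = rho_l*g*L + rho_v*g*(H - L), solved for L."""
    level = (dp_pa - rho_vapour * G * height_m) / ((rho_liquid - rho_vapour) * G)
    return min(max(level, 0.0), height_m)  # clamp to the physical section height

# Example with illustrative densities near 7 MPa saturation conditions:
# collapsed_level(48_000.0, 7.0, rho_liquid=740.0, rho_vapour=36.0)  # ~6.6 m
```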

  15. A Multi-Mode Video Driver for a High Resolution LCoS Display

    OpenAIRE

    Farrell, Ronan; Jacob, Mark; Maher, Roger

    2000-01-01

    This paper describes the design of a display driver for Liquid Crystal on Silicon (LCoS) microdisplays. These are high-resolution reflective display devices which allow up to 1280x1024 pixels on an area of 3.75 cm2, and are typically refreshed at 120 Hz. The required driver consists of a digital section capable of taking the common display formats such as SVGA and newer formats such as SXGA, and processing these to a common 120 Hz RGB signal, requiring an output rate of 160 mega-pixels/second. This signal...

  16. A System based on Adaptive Background Subtraction Approach for Moving Object Detection and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Bahadır KARASULU

    2013-04-01

    Full Text Available Video surveillance systems are based on the video and image processing research areas within computer science. Video processing covers various methods which are used to track the changes in an existing scene for a specific video. Nowadays, video processing is one of the important areas of computer science. Two-dimensional videos are used to apply various segmentation and object detection and tracking processes which exist in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking and similar applications. The background subtraction (BS) approach is a frequently used method for moving object detection and tracking. Similar methods for this problem exist in the literature. In this research study, a more efficient method is proposed as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), software for an object detection and tracking system was implemented. The performance of the developed system is tested via experimental work with related video datasets. The experimental results and discussion are given in the study.
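    The record does not give the exact ABS update rule, so the sketch below shows a generic running-average adaptive background subtractor; the learning rate and threshold are illustrative assumptions.

```python
# Minimal sketch of running-average adaptive background subtraction (ABS).
import numpy as np

class AdaptiveBackgroundSubtractor:
    def __init__(self, alpha: float = 0.05, threshold: int = 30):
        self.alpha = alpha          # background learning rate
        self.threshold = threshold  # foreground decision threshold
        self.background = None

    def apply(self, gray_frame: np.ndarray) -> np.ndarray:
        """Return a binary foreground mask (0 or 255) for one grayscale frame."""
        frame = gray_frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
        mask = (np.abs(frame - self.background) > self.threshold).astype(np.uint8) * 255
        # Update the model only where the scene is judged to be background,
        # so stationary foreground objects are not absorbed too quickly.
        update = self.alpha * frame + (1.0 - self.alpha) * self.background
        self.background = np.where(mask == 0, update, self.background)
        return mask
```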

  17. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
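    The paper's closed-form QP formula (built from the base-layer residual and quantization error) is not reproduced in the record; the sketch below only illustrates the general idea of a consistent-quality control loop that nudges the enhancement-layer QP toward a quality target, with a made-up proportional gain.

```python
# Illustrative consistent-quality control loop (NOT the paper's closed-form formula):
# adjust the enhancement-layer QP so measured quality stays near a target.
def next_qp(current_qp: int, measured_psnr: float, target_psnr: float,
            gain: float = 0.5, qp_min: int = 10, qp_max: int = 51) -> int:
    # Lower QP (finer quantization) when quality falls below target, and vice versa.
    adjustment = gain * (measured_psnr - target_psnr)
    qp = int(round(current_qp + adjustment))
    return max(qp_min, min(qp_max, qp))

# Example: next_qp(30, measured_psnr=35.2, target_psnr=37.0) -> 29
```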

  18. Computer-generated display system guidelines. Volume 2. Developing an evaluation plan

    International Nuclear Information System (INIS)

    1984-09-01

    Volume 1 of this report provides guidance to utilities on the design of displays and the selection and retrofit of a computer-generated display system in the control room of an operating nuclear power plant. Volume 2 provides guidance on planning and managing empirical evaluation of computer-generated display systems, particularly when these displays are primary elements of computer-based operator aids. The guidance provided is in terms of a multilevel evaluation methodology that enables sequential consideration of three primary issues: (1) compatibility; (2) understandability; and (3) effectiveness. The evaluation process approaches these three issues with a top-down review of system objectives, functions, tasks, and information requirements. The process then moves bottom-up from lower-level to higher-level issues, employing different evaluation methods at each level in order to maximize the efficiency and effectiveness of the evaluation process

  19. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems both for still images and videos. It uses a six axes platform, three being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still image and videos, the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the human visual system accuracy) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSC, DSLR and smartphones.
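    The video part of the protocol can be sketched as follows, assuming OpenCV and illustrative marker coordinates: the four detected markers define a homography back to the reference chart position, from which the apparent global motion is read off.

```python
# Sketch: register four detected chart markers to their reference positions
# through a homography and read off an apparent frame shift at the image centre.
import numpy as np
import cv2

reference_pts = np.float32([[100, 100], [900, 100], [900, 700], [100, 700]])
current_pts   = np.float32([[103, 98],  [904, 102], [902, 706], [99,  704]])

# getPerspectiveTransform solves the exact 4-point homography.
H = cv2.getPerspectiveTransform(current_pts, reference_pts)

centre = np.array([512.0, 384.0, 1.0])     # homogeneous image-centre coordinate
mapped = H @ centre
mapped /= mapped[2]
shift = mapped[:2] - centre[:2]
print("apparent frame shift (pixels):", shift)
```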

  20. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    International Nuclear Information System (INIS)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo

    2010-01-01

    The purpose of this study is to assess patients' satisfaction with a newly established video-monitor system and the associated basic items for performing breast ultrasound exams by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination during the 3 months after the monitor system was introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which were about basic items such as age, gender and the reason for taking the breast ultrasound exam, the preferred gender of the examiner and the desired length of time for the examination. The other 4 questions were about satisfaction with the video monitor. The patients were divided into two groups according to the purposes of taking the exams, which were screening or diagnostic purposes. The results were compared between these 2 groups. Satisfaction with the video monitor system was assessed by using a scoring system that ranged from 1 to 5. Of the total patients, the screening group was composed of 124 patients and the diagnostic group was composed of 225. The reasons why the patients wanted to take the examinations in the diagnostic group varied. The question about the preferred gender of the examiner showed that 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable time required for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The portion of patients in each group who answered over 3 points for their satisfaction with the monitor system was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination time and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  1. Breast Ultrasound Examination with Video Monitor System: A Satisfaction Survey among Patients

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, Jung Kyu; Kim, Hyun Cheol; Yang, Dal Mo [East-West Neo Medical Center, Kyung-Hee University, Seoul (Korea, Republic of)

    2010-03-15

    The purpose of this study is to assess patients' satisfaction with a newly established video-monitor system and the associated basic items for performing breast ultrasound exams by conducting a survey among the patients. 349 patients who had undergone a breast ultrasound examination during the 3 months after the monitor system was introduced were invited to take the survey. The questionnaire was composed of 8 questions, 4 of which were about basic items such as age, gender and the reason for taking the breast ultrasound exam, the preferred gender of the examiner and the desired length of time for the examination. The other 4 questions were about satisfaction with the video monitor. The patients were divided into two groups according to the purposes of taking the exams, which were screening or diagnostic purposes. The results were compared between these 2 groups. Satisfaction with the video monitor system was assessed by using a scoring system that ranged from 1 to 5. Of the total patients, the screening group was composed of 124 patients and the diagnostic group was composed of 225. The reasons why the patients wanted to take the examinations in the diagnostic group varied. The question about the preferred gender of the examiner showed that 81.5% in the screening group and 79.1% in the diagnostic group preferred a woman doctor. The suitable time required for the breast ultrasound examination was 5 to 10 minutes or 10 to 15 minutes for about 70% of the patients. The mean satisfaction score for the video monitor system was as high as 3.95 points. The portion of patients in each group who answered over 3 points for their satisfaction with the monitor system was 88.7% and 94.2%, respectively. Our study showed that patients preferred 5-15 minutes for the length of the examination time and a female examiner. We also confirmed high patient satisfaction with the video monitor system.

  2. Probable Effects Of Exposure To Electromagnetic Waves Emitted From Video Display Terminals On Ocular Functions

    International Nuclear Information System (INIS)

    Ahmed, M.A.

    2013-01-01

    There is a growing body of evidence that usage of computers can adversely affect visual health. Considering the rising number of computer users in Egypt, computer-related visual symptoms might take an epidemic form. In view of that, this study was undertaken to find out the magnitude of the visual problems in computer operators and its relationship with various personal and workplace factors. Aim: To evaluate the probable effects of exposure to electromagnetic waves radiated from visual display terminals on some visual functions. Subjects and Methods: One hundred fifty computer operators working in different institutes were randomly selected. They were asked to fill in a pre-tested questionnaire (written in Arabic), after obtaining their verbal consent. The selected exposed subjects were subjected to the following clinical assessment: 1- Visual acuity measurements. 2- Refraction (using an autorefractometer). 3- Measurements of ocular dryness defects using the following diagnostic tests: Schirmer test, fluorescein staining, Rose Bengal staining, Tear Break Up Time (TBUT) and the LIPCOF test (lid parallel conjunctival folds). A control group included one hundred fifty participants working in a field that does not necessitate exposure to video display terminals. Inclusion criteria for the subjects were as follows: a minimum of three symptoms of computer vision syndrome (CVS), a minimum of one year of exposure to VDTs and a minimum of 6 h/day over 5 working days/week. Exclusion criteria included candidates having ocular pathology such as glaucoma, optic atrophy, diabetic retinopathy or papilledema. The following complaints were studied: 1- Tired eyes. 2- Burning eyes with excessive tear production. 3- Dry sore eyes. 4- Blurred near vision (letters on the screen run together). 5- Asthenopia. 6- Neck, shoulder and back aches, overall bodily fatigue or tiredness. An interventional protective measure for the selected subjects from the exposed group was administered; it included the following (1

  3. [Current situations and problems of quality control for medical imaging display systems].

    Science.gov (United States)

    Shibutani, Takayuki; Setojima, Tsuyoshi; Ueda, Katsumi; Takada, Katsumi; Okuno, Teiichi; Onoguchi, Masahisa; Nakajima, Tadashi; Fujisawa, Ichiro

    2015-04-01

    Diagnostic imaging has shifted rapidly from film to monitor-based diagnosis. Consequently, the Japan medical imaging and radiological systems industries association (JIRA) has recommended methods of quality control (QC) for medical imaging display systems. However, in spite of its necessity, the execution rate is low. The purpose of this study was to examine the problems, including the check items, of QC for medical imaging display systems. We performed acceptance tests of medical imaging display monitors based on Japanese engineering standards of radiological apparatus (JESRA) X-0093*A-2005 until 2009, and performed constancy tests based on JESRA X-0093*A-2010 from 2010 to 2012. Furthermore, we investigated the causes of trouble and the number of repairs. Twenty-three monitors were found inappropriate by visual assessment, and all of them failed the JESRA criteria for luminance uniformity. Maximum luminance was significantly lower year by year in the quantitative assessment, and 29 monitors did not meet the JESRA criteria for luminance deviation. Twenty-five medical imaging display monitors were repaired, and the cause was liquid crystal panel failure. We identified the problems concerning medical imaging display systems.

  4. Jedi training: playful evaluation of head-mounted augmented reality display systems

    Science.gov (United States)

    Ozbek, Christopher S.; Giesler, Bjorn; Dillmann, Ruediger

    2004-05-01

    A fundamental decision in building augmented reality (AR) systems is how to accomplish the combining of the real and virtual worlds. Nowadays this key-question boils down to the two alternatives video-see-through (VST) vs. optical-see-through (OST). Both systems have advantages and disadvantages in areas like production-simplicity, resolution, flexibility in composition strategies, field of view etc. To provide additional decision criteria for high dexterity, accuracy tasks and subjective user-acceptance a gaming environment was programmed that allowed good evaluation of hand-eye coordination, and that was inspired by the Star Wars movies. During an experimentation session with more than thirty participants a preference for optical-see-through glasses in conjunction with infra-red-tracking was found. Especially the high-computational demand for video-capture, processing and the resulting drop in frame rate emerged as a key-weakness of the VST-system.

  5. VISDTA: A video imaging system for detection, tracking, and assessment: Prototype development and concept demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Pritchard, D.A.

    1987-05-01

    It has been demonstrated that thermal imagers are an effective surveillance and assessment tool for security applications because: (1) they work day or night due to their sensitivity to thermal signatures; (2) penetrability through fog, rain, dust, etc., is better than human eyes; (3) short or long range operation is possible with various optics; and (4) they are strictly passive devices providing visible imagery which is readily interpreted by the operator with little training. Unfortunately, most thermal imagers also require the setup of a tripod, connection of batteries, cables, display, etc. When this is accomplished, the operator must manually move the camera back and forth searching for signs of aggressor activity. VISDTA is designed to provide automatic panning, and in a sense, ''watch'' the imagery in place of the operator. The idea behind the development of VISDTA is to provide a small, portable, rugged system to automatically scan areas and detect targets by computer processing of images. It would use a thermal imager and possibly an intensified day/night TV camera, a pan/ tilt mount, and a computer for system control. If mounted on a dedicated vehicle or on a tower, VISDTA will perform video motion detection functions on incoming video imagery, and automatically scan predefined patterns in search of abnormal conditions which may indicate attempted intrusions into the field-of-regard. In that respect, VISDTA is capable of improving the ability of security forces to maintain security of a given area of interest by augmenting present techniques and reducing operator fatigue.
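    A minimal sketch of the kind of video motion detection described, assuming simple frame differencing with illustrative thresholds; the actual VISDTA algorithm is not detailed in the record.

```python
# Sketch: frame-differencing motion detection with a pixel-count alarm criterion.
import numpy as np

def motion_detected(prev_gray: np.ndarray, curr_gray: np.ndarray,
                    pixel_threshold: int = 25, area_fraction: float = 0.005) -> bool:
    """Alarm when more than a small fraction of pixels change significantly
    between consecutive frames of the same scan position."""
    diff = np.abs(curr_gray.astype(np.int32) - prev_gray.astype(np.int32))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed > area_fraction * diff.size
```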

  6. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

    In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that its color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.
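    One plausible reading of the statistics-remapping idea is sketched below: the per-channel mean and standard deviation of the input are matched to statistics learned from the training dataset. The target statistics shown are placeholders, not values from the study.

```python
# Sketch: remap per-channel image statistics to learned training statistics.
import numpy as np

def remap_statistics(image: np.ndarray, train_mean, train_std) -> np.ndarray:
    """image: HxWxC array; train_mean/train_std: per-channel target statistics."""
    out = np.empty_like(image, dtype=np.float64)
    for c in range(image.shape[2]):
        chan = image[..., c].astype(np.float64)
        std = chan.std() or 1.0                       # guard against flat channels
        out[..., c] = (chan - chan.mean()) / std * train_std[c] + train_mean[c]
    return np.clip(out, 0, 255).astype(np.uint8)

# Example with placeholder statistics learned offline from the training set:
# mapped = remap_statistics(frame, train_mean=[126.0, 118.0, 109.0],
#                           train_std=[41.0, 39.0, 37.0])
```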

  7. Computer system for nuclear power plant parameter display

    International Nuclear Information System (INIS)

    Stritar, A.; Klobuchar, M.

    1990-01-01

    The computer system for efficient, cheap and simple presentation of data on the screen of the personal computer is described. The display is in alphanumerical or graphical form. The system can be used for the man-machine interface in the process monitoring system of the nuclear power plant. It represents the third level of the new process computer system of the Nuclear Power Plant Krsko. (author)

  8. Virtual Video Prototyping of Pervasive Healthcare Systems

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Bossen, Claus; Madsen, Kim Halskov

    2002-01-01

    Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video....... In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design...... issues, since one cannot avoid paying attention to the physical, real-world constraints and to details in the usage-interaction between users and technology. From the users' perspective, during our evaluation of the virtual video prototype, we experienced how it enabled users to relate...

  9. 75 FR 75186 - Interview Room Video System Standard Special Technical Committee Request for Proposals for...

    Science.gov (United States)

    2010-12-02

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1534] Interview Room Video System Standard Special Technical Committee Request for Proposals for Certification and Testing Expertise... Interview Room Video System Standard and corresponding certification program requirements. This work is...

  10. Minimalism context-aware displays.

    Science.gov (United States)

    Cai, Yang

    2004-12-01

    Despite the rapid development of cyber technologies, today we still have very limited attention and communication bandwidth to process the increasing information flow. The goal of the study is to develop a context-aware filter to match the information load with particular needs and capacities. The functions include bandwidth-resolution trade-off and user context modeling. From the empirical lab studies, it is found that the resolution of images can be reduced in order of magnitude if the viewer knows that he/she is looking for particular features. The adaptive display queue is optimized with real-time operational conditions and user's inquiry history. Instead of measuring operator's behavior directly, ubiquitous computing models are developed to anticipate user's behavior from the operational environment data. A case study of the video stream monitoring for transit security is discussed in the paper. In addition, the author addresses the future direction of coherent human-machine vision systems.

  11. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
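    The iterative-closest-point registration at the core of the overlay can be sketched as a single ICP iteration, assuming NumPy/SciPy and illustrative point clouds; a full implementation repeats this step until the residual stops improving.

```python
# Sketch of one ICP iteration between stereo-derived surface points and a CT surface model.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source: np.ndarray, target: np.ndarray):
    """One ICP iteration: closest-point matching followed by the best
    rigid transform (Kabsch/SVD). source: Nx3 array, target: Mx3 array."""
    matches = target[cKDTree(target).query(source)[1]]   # nearest CT point per source point
    src_c, tgt_c = source.mean(axis=0), matches.mean(axis=0)
    H = (source - src_c).T @ (matches - tgt_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                               # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Apply repeatedly: source = source @ R.T + t, until the mean residual converges.
```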

  12. A variable-collimation display system

    Science.gov (United States)

    Batchko, Robert; Robinson, Sam; Schmidt, Jack; Graniela, Benito

    2014-03-01

    Two important human depth cues are accommodation and vergence. Normally, the eyes accommodate and converge or diverge in tandem; changes in viewing distance cause the eyes to simultaneously adjust both focus and orientation. However, ambiguity between accommodation and vergence cues is a well-known limitation in many stereoscopic display technologies. This limitation also arises in state-of-the-art full-flight simulator displays. In current full-flight simulators, the out-the-window (OTW) display (i.e., the front cockpit window display) employs a fixed collimated display technology which allows the pilot and copilot to perceive the OTW training scene without angular errors or distortions; however, accommodation and vergence cues are limited to fixed ranges (e.g., ~ 20 m). While this approach works well for long-range, the ambiguity of depth cues at shorter range hinders the pilot's ability to gauge distances in critical maneuvers such as vertical take-off and landing (VTOL). This is the first in a series of papers on a novel, variable-collimation display (VCD) technology that is being developed under NAVY SBIR Topic N121-041 funding. The proposed VCD will integrate with rotary-wing and vertical take-off and landing simulators and provide accurate accommodation and vergence cues for distances ranging from approximately 3 m outside the chin window to ~ 20 m. A display that offers dynamic accommodation and vergence could improve pilot safety and training, and impact other applications presently limited by lack of these depth cues.

  13. Light Steering Projection Systems and Attributes for HDR Displays

    KAUST Repository

    Damberg, Gerwin

    2017-06-02

    New light steering projectors in cinema form images by moving light away from dark regions into bright areas of an image. In these systems, the peak luminance of small features can far exceed full screen white luminance. In traditional projectors where light is filtered or blocked in order to give shades of gray (or colors), the peak luminance is fixed. The luminance of chromatic features benefit in the same way as white features, and chromatic image details can be reproduced at high brightness leading to a much wider overall color gamut coverage than previously possible. Projectors of this capability are desired by the creative community to aid in and enhance storytelling. Furthermore, reduced light source power requirements of light steering projectors provide additional economic and environmental benefits. While the dependency of peak luminance level on (bright) image feature size is new in the digital cinema space, display technologies with identical characteristics such as OLED, LED LCD and Plasma TVs are well established in the home. Similarly, direct view LED walls are popular in events, advertising and architectural markets. To enable consistent color reproduction across devices in today’s content production pipelines, models that describe modern projectors and display attributes need to evolve together with HDR standards and available metadata. This paper is a first step towards rethinking legacy display descriptors such as contrast, peak luminance and color primaries in light of new display technology. We first summarize recent progress in the field of light steering projectors in cinema and then, based on new projector and existing display characteristics propose the inclusion of two simple display attributes: Maximum Average Luminance and Peak (Color) Primary Luminance. We show that the proposed attributes allow a better prediction of content reproducibility on HDR displays. To validate this assertion, we test professional content on a commercial HDR
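    The two proposed attributes are only named in the abstract, so the sketch below is a hedged interpretation: Maximum Average Luminance taken as the largest full-frame mean luminance over the content, and Peak Primary Luminance as the per-primary peak, both computed from absolute-luminance HDR frames. The frame-wise reductions used here are assumptions, not the paper's definitions.

```python
# Hedged sketch of the two proposed display/content attributes, computed from
# HDR frames given as absolute-luminance arrays (cd/m^2).
import numpy as np

def maximum_average_luminance(luma_frames) -> float:
    """Largest full-frame average luminance reached over the content."""
    return max(float(np.mean(f)) for f in luma_frames)

def peak_primary_luminance(rgb_frames) -> dict:
    """Peak luminance reached by each colour primary across the content,
    assuming frames carry per-primary luminance contributions."""
    peaks = {"R": 0.0, "G": 0.0, "B": 0.0}
    for f in rgb_frames:
        for i, name in enumerate("RGB"):
            peaks[name] = max(peaks[name], float(f[..., i].max()))
    return peaks
```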

  14. 47 CFR 63.02 - Exemptions for extensions of lines and for systems for the delivery of video programming.

    Science.gov (United States)

    2010-10-01

    ... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. [64...

  15. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  16. Baited remote underwater video system (BRUVs) survey of ...

    African Journals Online (AJOL)

    This is the first baited remote underwater video system (BRUVs) survey of the relative abundance, diversity and seasonal distribution of chondrichthyans in False Bay. Nineteen species from 11 families were recorded across 185 sites at between 4 and 49 m depth. Diversity was greatest in summer, on reefs and in shallow ...

  17. Artificial intelligence enhancements to safety parameter display systems

    International Nuclear Information System (INIS)

    Hajek, B.K.; Hashemi, S.; Sharma, D.; Chandrasekaran, B.; Miller, D.W.

    1986-01-01

    Two prototype knowledge based systems have been developed at The Ohio State University to be the basis of an operator aid that can be attached to an existing nuclear power plant Safety Parameter Display System. The first system uses improved sensor validation techniques to provide input to a fault diagnosis process. The second system would use the diagnostic system output to synthesize corrective procedures to aid the control room licensed operator in plant recovery

  18. Super VHS video cassette recorder, A-SB88; Super VHS video A-SB88

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    A super VHS video cassette recorder, the A-SB88, was commercialized with no compromises in picture quality, sound quality, operability, energy conservation, design, etc. For picture quality, the VCR is equipped with the S-ET system, capable of realizing quality comparable to S-VHS on a normal tape, together with a three-dimensional Y/C separation circuit with dynamic moving-image detection, three-dimensional DNR (digital noise reduction), a TBC (time base corrector), and an FE (flying erase) circuit. For operability, it is provided with a remote control with a large LCD, 400x high-speed rewind, a reservation system capable of simply reserving, for example, a serial drama, and a function for searching for the end of a recording; also, in the environmental aspect, the standby power consumption was reduced to 1/10 that of conventional models (ratio with the Toshiba A-BS6 with the display powered off). (translated by NEDO)

  19. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    Science.gov (United States)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
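    A hedged sketch of the synthesis step described above, assuming OpenCV: a Laplacian pyramid is built per frame and recombined with gaze-dependent weights so that high-frequency contrast is attenuated away from fixation. The Gaussian falloff profile and its parameters are assumptions, not the authors' exact weighting.

```python
# Sketch: gaze-contingent contrast modulation via a locally weighted
# recombination of Laplacian pyramid levels.
import numpy as np
import cv2

def build_laplacian_pyramid(gray: np.ndarray, levels: int = 5):
    pyr, current = [], gray.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyr.append(current - up)       # band-pass level
        current = down
    pyr.append(current)                # low-pass residual
    return pyr

def gaze_contingent_frame(gray, gaze_xy, sigma_px=200.0, attenuation=0.5):
    """Attenuate band-pass contrast away from gaze; keep it intact at fixation."""
    pyr = build_laplacian_pyramid(gray)
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist2 = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2
    falloff = attenuation + (1 - attenuation) * np.exp(-dist2 / (2 * sigma_px ** 2))
    out = pyr[-1]                      # start from the coarsest level
    for level in reversed(pyr[:-1]):
        out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0]))
        weight = cv2.resize(falloff, (level.shape[1], level.shape[0])).astype(np.float32)
        out = out + weight * level
    return np.clip(out, 0, 255).astype(np.uint8)
```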

  20. Three-dimensional (3-D) video systems: bi-channel or single-channel optics?

    Science.gov (United States)

    van Bergen, P; Kunert, W; Buess, G F

    1999-11-01

    This paper presents the results of a comparison between two different three-dimensional (3-D) video systems, one with single-channel optics, the other with bi-channel optics. The latter integrates two lens systems, each transferring one half of the stereoscopic image; the former uses only one lens system, similar to a two-dimensional (2-D) endoscope, which transfers the complete stereoscopic picture. In our training centre for minimally invasive surgery, surgeons were involved in basic and advanced laparoscopic courses using both a 2-D system and the two 3-D video systems. They completed analog scale questionnaires in order to record a subjective impression of the relative convenience of operating in 2-D and 3-D vision, and to identify perceived deficiencies in the 3-D system. As an objective test, different experimental tasks were developed, in order to measure performance times and to count pre-defined errors made while using the two 3-D video systems and the 2-D system. Using the bi-channel optical system, the surgeon has a heightened spatial perception, and can work faster and more safely than with a single-channel system. However, single-channel optics allow the use of an angulated endoscope, and the free rotation of the optics relative to the camera, which is necessary for some operative applications.

  1. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer-to-Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market, but prior to creating such a system it is necessary to analyze its performance via a representative model that can provide good insight into the system's behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  2. Manageable and Extensible Video Streaming Systems for On-Line Monitoring of Remote Laboratory Experiments

    Directory of Open Access Journals (Sweden)

    Jian-Wei Lin

    2009-08-01

    Full Text Available To enable clients to view real-time video of the involved instruments during a remote experiment, two real-time video streaming systems are devised. One is for remote experiments whose instruments are located in a single geographic spot, and the other is for those whose instruments are scattered across different places. By running concurrent streaming processes on a server, multiple instruments can be monitored simultaneously by different clients. The proposed systems possess excellent extensibility; that is, new digital cameras for instruments can easily be added without modifying any software. They are also well-manageable, meaning that an administrator can conveniently adjust the quality of the real-time video depending on system load and visual requirements. Finally, the CPU utilization and bandwidth consumption of the systems were evaluated to verify the effectiveness of the proposed solutions.

  3. Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-based Attention System

    Science.gov (United States)

    Edgington, D. R.; Walther, D.; Cline, D. E.; Sherlock, R.; Salamy, K. A.; Wilson, A.; Koch, C.

    2003-12-01

    The Monterey Bay Aquarium Research Institute (MBARI) uses high-resolution video equipment on remotely operated vehicles (ROV) to obtain quantitative data on the distribution and abundance of oceanic animals. High-quality video data supplants the traditional approach of assessing the kinds and numbers of animals in the oceanic water column through towing collection nets behind ships. Tow nets are limited in spatial resolution, and often destroy abundant gelatinous animals resulting in species undersampling. Video camera-based quantitative video transects (QVT) are taken through the ocean midwater, from 50m to 4000m, and provide high-resolution data at the scale of the individual animals and their natural aggregation patterns. However, the current manual method of analyzing QVT video by trained scientists is labor intensive and poses a serious limitation to the amount of information that can be analyzed from ROV dives. Presented here is an automated system for detecting marine animals (events) visible in the videos. Automated detection is difficult due to the low contrast of many translucent animals and due to debris ("marine snow") cluttering the scene. Video frames are processed with an artificial intelligence attention selection algorithm that has proven a robust means of target detection in a variety of natural terrestrial scenes. The candidate locations identified by the attention selection module are tracked across video frames using linear Kalman filters. Typically, the occurrence of visible animals in the video footage is sparse in space and time. A notion of "boring" video frames is developed by detecting whether or not there is an interesting candidate object for an animal present in a particular sequence of underwater video -- video frames that do not contain any "interesting" events. If objects can be tracked successfully over several frames, they are stored as potentially "interesting" events. Based on low-level properties, interesting events are
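
    The tracking step mentioned above, linear Kalman filters applied to candidate locations across frames, can be illustrated with a minimal constant-velocity filter. The matrices and noise levels below are illustrative assumptions, not MBARI's actual settings.

```python
import numpy as np

class LinearKalmanTracker:
    """Track one candidate detection (x, y) across frames with a constant-velocity model."""

    def __init__(self, x0, y0, process_noise=1.0, measurement_noise=4.0):
        self.state = np.array([x0, y0, 0.0, 0.0])        # [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                         # state covariance
        self.F = np.array([[1, 0, 1, 0],                  # constant-velocity transition
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                  # we only observe position
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_noise
        self.R = np.eye(2) * measurement_noise

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, measurement):
        z = np.asarray(measurement, dtype=float)
        innovation = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ innovation
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.state[:2]

# Example: feed in the (x, y) candidate locations returned by the attention stage.
tracker = LinearKalmanTracker(120, 80)
for detection in [(122, 83), (125, 86), (129, 88)]:
    tracker.predict()
    print(tracker.update(detection))
```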

  4. Implications of Shared Interactive Displays for Work at a Surgery Ward: Coordination, Articulation Work and Context-awareness

    DEFF Research Database (Denmark)

    Bossen, Claus; Jensen, Lis Witte Kjær

    2008-01-01

    We report on experiences gained from the use at a surgery ward of shared interactive displays to support coordination and communication.  The displays merge large displays, video feed, RFID tag, chat and mobile phones to facilitate better coordination and articulation of work tasks and enhance...

  5. A Novel System for Supporting Autism Diagnosis Using Home Videos: Iterative Development and Evaluation of System Design.

    Science.gov (United States)

    Nazneen, Nazneen; Rozga, Agata; Smith, Christopher J; Oberleitner, Ron; Abowd, Gregory D; Arriaga, Rosa I

    2015-06-17

    Observing behavior in the natural environment is valuable to obtain an accurate and comprehensive assessment of a child's behavior, but in practice it is limited to in-clinic observation. Research shows significant time lag between when parents first become concerned and when the child is finally diagnosed with autism. This lag can delay early interventions that have been shown to improve developmental outcomes. To develop and evaluate the design of an asynchronous system that allows parents to easily collect clinically valid in-home videos of their child's behavior and supports diagnosticians in completing diagnostic assessment of autism. First, interviews were conducted with 11 clinicians and 6 families to solicit feedback from stakeholders about the system concept. Next, the system was iteratively designed, informed by experiences of families using it in a controlled home-like experimental setting and a participatory design process involving domain experts. Finally, in-field evaluation of the system design was conducted with 5 families of children (4 with previous autism diagnosis and 1 child typically developing) and 3 diagnosticians. For each family, 2 diagnosticians, blind to the child's previous diagnostic status, independently completed an autism diagnosis via our system. We compared the outcome of the assessment between the 2 diagnosticians, and between each diagnostician and the child's previous diagnostic status. The system that resulted through the iterative design process includes (1) NODA smartCapture, a mobile phone-based application for parents to record prescribed video evidence at home; and (2) NODA Connect, a Web portal for diagnosticians to direct in-home video collection, access developmental history, and conduct an assessment by linking evidence of behaviors tagged in the videos to the Diagnostic and Statistical Manual of Mental Disorders criteria. Applying clinical judgment, the diagnostician concludes a diagnostic outcome. During field

  6. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  7. Optical Head-Mounted Computer Display for Education, Research, and Documentation in Hand Surgery.

    Science.gov (United States)

    Funk, Shawn; Lee, Donald H

    2016-01-01

    Intraoperative photography and video capture are important for the hand surgeon. Recently, the optical head-mounted computer display has been introduced as a means of capturing photographs and videos. In this article, we discuss this new technology and review its potential use in hand surgery. Copyright © 2016 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  8. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  9. Expert Behavior in Children's Video Game Play.

    Science.gov (United States)

    VanDeventer, Stephanie S.; White, James A.

    2002-01-01

    Investigates the display of expert behavior by seven outstanding video game-playing children ages 10 and 11. Analyzes observation and debriefing transcripts for evidence of self-monitoring, pattern recognition, principled decision making, qualitative thinking, and superior memory, and discusses implications for educators regarding the development…

  10. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  11. Techniques for optimizing human-machine information transfer related to real-time interactive display systems

    Science.gov (United States)

    Granaas, Michael M.; Rhea, Donald C.

    1989-01-01

    In recent years the needs of ground-based researcher-analysts to access real-time engineering data in the form of processed information have expanded rapidly. Fortunately, the capacity to deliver that information has also expanded. The development of advanced display systems is essential to the success of a research test activity. Those developed at the National Aeronautics and Space Administration (NASA), Western Aeronautical Test Range (WATR), range from simple alphanumerics to interactive mapping and graphics. These unique display systems are designed not only to meet basic information display requirements of the user, but also to take advantage of techniques for optimizing information display. Future ground-based display systems will rely heavily not only on new technologies, but also on interaction with the human user and the associated productivity with that interaction. The psychological abilities and limitations of the user will become even more important in defining the difference between a usable and a useful display system. This paper reviews the requirements for development of real-time displays; the psychological aspects of design such as the layout, color selection, real-time response rate, and interactivity of displays; and an analysis of some existing WATR displays.

  12. Stereoscopic 3D video games and their effects on engagement

    Science.gov (United States)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation are well understood, there are many questions yet to be answered surrounding its effects on the viewer. Effects of stereoscopic display on passive viewers for film are known, however video games are fundamentally different since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance but very few have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on their overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D have on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  13. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  14. High-resolution laser-projection display system using a grating electromechanical system (GEMS)

    Science.gov (United States)

    Brazas, John C.; Kowarz, Marek W.

    2004-01-01

    Eastman Kodak Company has developed a diffractive-MEMS spatial-light modulator for use in printing and display applications, the grating electromechanical system (GEMS). This modulator contains a linear array of pixels capable of high-speed digital operation, high optical contrast, and good efficiency. The device operation is based on deflection of electromechanical ribbons suspended above a silicon substrate by a series of intermediate supports. When electrostatically actuated, the ribbons conform to the supporting substructure to produce a surface-relief phase grating over a wide active region. The device is designed to be binary, switching between a reflective mirror state having suspended ribbons and a diffractive grating state having ribbons in contact with substrate features. Switching times of less than 50 nanoseconds with sub-nanosecond jitter are made possible by reliable contact-mode operation. The GEMS device can be used as a high-speed digital-optical modulator for a laser-projection display system by collecting the diffracted orders and taking advantage of the low jitter. A color channel is created using a linear array of individually addressable GEMS pixels. A two-dimensional image is produced by sweeping the line image of the array, created by the projection optics, across the display screen. Gray levels in the image are formed using pulse-width modulation (PWM). A high-resolution projection display was developed using three 1080-pixel devices illuminated by red, green, and blue laser-color primaries. The result is an HDTV-format display capable of producing stunning still and motion images with very wide color gamut.

  15. The Aesthetics of the Ambient Video Experience

    Directory of Open Access Journals (Sweden)

    Jim Bizzocchi

    2008-01-01

    Full Text Available Ambient Video is an emergent cultural phenomenon, with roots that go deeply into the history of experimental film and video art. Ambient Video, like Brian Eno's ambient music, is video that "must be as easy to ignore as notice" [9]. This minimalist description conceals the formidable aesthetic challenge that faces this new form. Ambient video art works will hang on the walls of our living rooms, corporate offices, and public spaces. They will play in the background of our lives, living video paintings framed by the new generation of elegant, high-resolution flat-panel display units. However, they cannot command attention like a film or television show. They will patiently play in the background of our lives, yet they must always be ready to justify our attention in any given moment. In this capacity, ambient video works need to be equally proficient at rewarding a fleeting glance, a more direct look, or a longer contemplative gaze. This paper connects a series of threads that collectively illuminate the aesthetics of this emergent form: its history as a popular culture phenomenon, its more substantive artistic roots in avant-garde cinema and video art, its relationship to new technologies, the analysis of the viewer's conditions of reception, and the work of current artists who practice within this form.

  16. Web-based video monitoring of CT and MRI procedures

    Science.gov (United States)

    Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael

    2000-05-01

    A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec through a standard LAN. Although the image quality is insufficient for diagnostic purposes, our user survey showed that the images were suitable for supervising a procedure, positioning the imaging slices, and routine quality checking before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed in 4 buildings. This system significantly improved the radiologists' productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the waiting time for the radiologists to come to check a study before moving the patient from the scanner.

  17. When less is best: female brown-headed cowbirds prefer less intense male displays.

    Science.gov (United States)

    O'Loghlen, Adrian L; Rothstein, Stephen I

    2012-01-01

    Sexual selection theory predicts that females should prefer males with the most intense courtship displays. However, wing-spread song displays that male brown-headed cowbirds (Molothrus ater) direct at females are generally less intense than versions of this display that are directed at other males. Because male-directed displays are used in aggressive signaling, we hypothesized that females should prefer lower intensity performances of this display. To test this hypothesis, we played audiovisual recordings showing the same males performing both high intensity male-directed and low intensity female-directed displays to females (N = 8) and recorded the females' copulation solicitation display (CSD) responses. All eight females responded strongly to both categories of playbacks but were more sexually stimulated by the low intensity female-directed displays. Because each pair of high and low intensity playback videos had the exact same audio track, the divergent responses of females must have been based on differences in the visual content of the displays shown in the videos. Preferences female cowbirds show in acoustic CSD studies are correlated with mate choice in field and captivity studies and this is also likely to be true for preferences elucidated by playback of audiovisual displays. Female preferences for low intensity female-directed displays may explain why male cowbirds rarely use high intensity displays when signaling to females. Repetitive high intensity displays may demonstrate a male's current condition and explain why these displays are used in male-male interactions which can escalate into physical fights in which males in poorer condition could be injured or killed. This is the first study in songbirds to use audiovisual playbacks to assess how female sexual behavior varies in response to variation in a male visual display.

  18. Three-dimensional simulation and auto-stereoscopic 3D display of the battlefield environment based on the particle system algorithm

    Science.gov (United States)

    Ning, Jiwei; Sang, Xinzhu; Xing, Shujun; Cui, Huilong; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The army's combat training is very important now, and the simulation of the real battlefield environment is of great significance. Two-dimensional information is no longer able to meet the demand. With the development of virtual reality technology, three-dimensional (3D) simulation of the battlefield environment is possible. In the simulation of a 3D battlefield environment, in addition to the terrain, combat personnel, and combat tools, the simulation of explosions, fire, smoke, and other effects is also very important, since these effects enhance the sense of realism and immersion of the 3D scene. However, these special effects are irregular objects, which are difficult to simulate with general geometry. Therefore, the simulation of irregular objects has always been a hot and difficult research topic in computer graphics. Here, the particle system algorithm is used to simulate irregular objects. We design the simulation of explosions, fire, and smoke based on the particle system and apply it to the battlefield 3D scene. In addition, the battlefield 3D scene is presented on a glasses-free 3D display using an algorithm based on a GPU 4K super-multiview real-time 3D video transformation method. At the same time, with a human-computer interaction function, we ultimately realize a glasses-free 3D display of a more realistic and immersive simulated 3D battlefield environment.
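
    A particle system of the kind described here boils down to an emitter that spawns short-lived particles and integrates their motion every frame. The following Python sketch shows that core loop under purely illustrative constants; it is not the scene-graph or GPU code used in the paper.

```python
import random

class Particle:
    def __init__(self, x, y, z):
        self.pos = [x, y, z]
        # Random initial velocity gives the burst its irregular shape.
        self.vel = [random.uniform(-1, 1), random.uniform(0.5, 2.0), random.uniform(-1, 1)]
        self.life = random.uniform(1.0, 3.0)      # seconds until the particle dies

class ParticleEmitter:
    def __init__(self, origin, rate=200):
        self.origin = origin
        self.rate = rate                          # particles spawned per second
        self.particles = []

    def step(self, dt, gravity=(0.0, -0.5, 0.0)):
        # Spawn new particles at the emitter origin.
        for _ in range(int(self.rate * dt)):
            self.particles.append(Particle(*self.origin))
        # Integrate velocity/position and age out dead particles.
        alive = []
        for p in self.particles:
            for i in range(3):
                p.vel[i] += gravity[i] * dt
                p.pos[i] += p.vel[i] * dt
            p.life -= dt
            if p.life > 0:
                alive.append(p)
        self.particles = alive

emitter = ParticleEmitter(origin=(0.0, 0.0, 0.0))
for frame in range(60):                           # simulate one second at 60 fps
    emitter.step(1.0 / 60.0)
print(len(emitter.particles), "particles alive after 1 s")
```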

  19. A Practical Strategy for Teaching a Child with Autism to Attend to and Imitate a Portable Video Model

    Science.gov (United States)

    Plavnick, Joshua B.

    2012-01-01

    Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…

  20. Markerless client-server augmented reality system with natural features

    Science.gov (United States)

    Ning, Shuangning; Sang, Xinzhu; Chen, Duo

    2017-10-01

    A markerless client-server augmented reality system is presented. In this research, the more widespread and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display presents an image directly in front of the viewer's eyes, while its front-facing camera captures video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera, and the integrated video is sent to the helmet display system. The distinguishing feature and novelty is that the augmented reality is realized with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are unsuitable for some environmental conditions, and in particular fail when the marker is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the computational burden of the system.

  1. Evaluation of the educational value of YouTube videos about physical examination of the cardiovascular and respiratory systems.

    Science.gov (United States)

    Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-11-13

    A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 on respiratory examinations, were not useful educationally, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were statistically significant. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.

  2. A Framework for Realistic Modeling and Display of Object Surface Appearance

    Science.gov (United States)

    Darling, Benjamin A.

    With advances in screen and video hardware technology, the type of content presented on computers has progressed from text and simple shapes to high-resolution photographs, photorealistic renderings, and high-definition video. At the same time, there have been significant advances in the area of content capture, with the development of devices and methods for creating rich digital representations of real-world objects. Unlike photo or video capture, which provide a fixed record of the light in a scene, these new technologies provide information on the underlying properties of the objects, allowing their appearance to be simulated for novel lighting and viewing conditions. These capabilities provide an opportunity to continue the computer display progression, from high-fidelity image presentations to digital surrogates that recreate the experience of directly viewing objects in the real world. In this dissertation, a framework was developed for representing objects with complex color, gloss, and texture properties and displaying them onscreen to appear as if they are part of the real-world environment. At its core, there is a conceptual shift from a traditional image-based display workflow to an object-based one. Instead of presenting the stored patterns of light from a scene, the objective is to reproduce the appearance attributes of a stored object by simulating its dynamic patterns of light for the real viewing and lighting geometry. This is accomplished using a computational approach where the physical light sources are modeled and the observer and display screen are actively tracked. Surface colors are calculated for the real spectral composition of the illumination with a custom multispectral rendering pipeline. In a set of experiments, the accuracy of color and gloss reproduction was evaluated by measuring the screen directly with a spectroradiometer. Gloss reproduction was assessed by comparing gonio measurements of the screen output to measurements of the

  3. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Some of the benefits include a reduced need for on-site security and operating personnel and its patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code, and re-programmed code. 1 fig.

  4. A colour video enhancement terminal for computerised tomography

    International Nuclear Information System (INIS)

    Webb, J.A.C.; Bell, T.K.

    1981-01-01

    An alternative colour system has been developed for the EMI scanner incorporating a sixteen colour table with a selection of scale manipulation facilities. Features such as colour deletion and colour of interest pulsation are included and the output is available both in RGB (red, green, blue) form and PAL (phase alteration line by line) coded composite video form (625 line interlaced) to facilitate the use of a domestic television receiver. A digital processing unit, implemented in SSI (small scale integration) and MSI (medium scale integration) logic, is interfaced to the independent viewing centre frame buffer memory. The unit is housed in a 19 inch cabinet on five standard Eurocards with three modular power supplies. The front panel provides a selection of switch options effecting instantaneous changes in the display. Digital information is processed in real time so that normal window and level variations are tracked by the colour display. The cost of the complete system was about £1800 and of this, £1000 was absorbed in the selection of a high quality RGB monitor (Sony PVM1300E). (author)

  5. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300 °C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability

  6. Video-Based Big Data Analytics in Cyberlearning

    Science.gov (United States)

    Wang, Shuangbao; Kelly, William

    2017-01-01

    In this paper, we present a novel system, inVideo, for video data analytics, and its use in transforming linear videos into interactive learning objects. InVideo is able to analyze video content automatically without the need for initial viewing by a human. Using a highly efficient video indexing engine we developed, the system is able to analyze…

  7. Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.

    Science.gov (United States)

    1981-02-01

    Excerpted specifications: camera line rate 732.4 lines/s; pixels per line: 1728 video, 314 blank, 4 line number (binary), 2 run number (BCD), 2048 total; pixel resolution 8 bits. The image processor system consists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, and an FT-1 function key-trackball module.

  8. Human factors considerations for the use of color in display systems

    Science.gov (United States)

    Demars, S. A.

    1975-01-01

    Identified and assessed are those human factor considerations impacting an operator's ability to perform when information is displayed in color as contrasted to monochrome (black and white only). The findings provide valuable guidelines for the assessment of the advantages (and disadvantages) of using a color display system. The use of color provides an additional sensory channel (color perception) which is not available with black and white. The degree to which one can exploit the use of this channel is highly dependent on available display technology, mission information display requirements, and acceptable operational modes.

  9. Advanced real-time multi-display educational system (ARMES): An innovative real-time audiovisual mentoring tool for complex robotic surgery.

    Science.gov (United States)

    Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin

    2017-12-01

    The recent scientific and technologic advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES). We created a real-time training module, ARMES, which can provide standardized step-by-step guidance for robotic distal subtotal gastrectomy with D2 lymphadenectomy procedures. Short video clips of the 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on da Vinci Surgical Systems (Intuitive Surgical, Sunnyvale, CA). We successfully performed robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method, without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.

  10. A history and overview of the safety parameter display system concept

    International Nuclear Information System (INIS)

    Joyce, J.P.; Lapinsky, G.W.

    1983-01-01

    Inquiries into the accident at the Three Mile Island Nuclear Power Plant Unit 2, on March 28, 1979 brought to public attention the need to improve operators' capabilities to interact with the systems under their control. Recommendations ran the full gamut of human/machine interaction, from improvements in training and procedures to improvements in control and display hardware in the control room. This presentation briefly traces the history and development of a display concept that evolved in the post-TMI era, the Safety Parameter Display System or SPDS. The SPDS is intended to function as a detection aid for control room operators, providing an integrated overview of significant plant parameters. The purpose of this report is to describe the general concept of SPDS, its history, and its current regulatory status. A review of NRC guidance documents is included, as well as a discussion of NRC requirements placed on the SPDS. The presentation concludes with an outline of the NRC staff review process for safety parameter display systems and a synopsis of the results of generic SPDS reviews performed thus far

  11. Laser display system for multi-depth screen projection scenarios.

    Science.gov (United States)

    La Torre, J Pablo; Mayes, Nathan; Riza, Nabeel A

    2017-11-10

    Proposed is a laser projection display system that uses an electronically controlled variable focus lens (ECVFL) to achieve sharp and in-focus image projection over multi-distance three-dimensional (3D) conformal screens. The system also functions as an embedded distance sensor that enables 3D mapping of the multi-level screen platform before the desired laser scanned beam focused/defocused projected spot sizes are matched to the different localized screen distances on the 3D screen. Compared to conventional laser scanning and spatial light modulator (SLM) based projection systems, the proposed design offers in-focus non-distorted projection over a multi-distance screen zone with varying depths. An experimental projection system for a screen depth variation of 65 cm is demonstrated using a 633 nm laser beam, 3 kHz scan speed galvo-scanning mirrors, and a liquid-based ECVFL. As a basic demonstration, an in-house developed MATLAB-based graphical user interface is deployed to work along with the laser projection display, enabling user inputs like text strings or predefined image projection. The user can specify projection screen distance, scanned laser linewidth, projected text font size, projected image dimensions, and laser scanning rate. Projected images are shown highlighting the 3D control capabilities of the display, including the production of a non-distorted image onto two depths versus a distorted image via dominant prior-art projection methods.

  12. Video dosimetry: evaluation of X-radiation dose by video fluoroscopic image

    International Nuclear Information System (INIS)

    Nova, Joao Luiz Leocadio da; Lopes, Ricardo Tadeu

    1996-01-01

    A new methodology to evaluate the entrance surface dose on patients under radiodiagnosis is presented. A phantom is used in video fluoroscopic procedures with an online video signal system. The images are obtained from a Siemens Polymat 50 and are digitized. The results show that the entrance surface dose can be obtained in real time from video imaging

  13. A portable wireless power transmission system for video capsule endoscopes.

    Science.gov (United States)

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE) powered by button batteries, but the fixed platform limited its clinical application. This paper presents a portable WPT system for VCE. Besides portability, power transfer efficiency and stability are considered as the main indexes of optimization design of the system, which consists of the transmitting coil structure, portable control box, operating frequency, magnetic core and winding of receiving coil. Upon the above principles, the correlation parameters are measured, compared and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of design scheme, and further improvement direction is discussed.

  14. Capture and playback synchronization in video conferencing

    Science.gov (United States)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding with the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents some solutions to these problems that can be useful at end-station terminals in today's massively deployed packet-switching networks. The playback scheme at the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip synchronization errors, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to multiparty teleconferencing environments. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
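
    One common way to realize the compression-domain buffering described above is a sequence-ordered playout buffer that holds packets for a fixed delay to absorb network jitter and reordering. The sketch below illustrates that idea in Python; the delay value and clock model are assumptions, not the MMT prototype's design.

```python
import heapq

class PlayoutBuffer:
    """Reorder incoming packets and release them on a fixed playout delay."""

    def __init__(self, playout_delay=0.2):
        self.playout_delay = playout_delay     # seconds of buffering
        self.heap = []                         # (sequence_number, packet)

    def receive(self, sequence_number, packet):
        # Packets may arrive out of order; the heap restores sequence order.
        heapq.heappush(self.heap, (sequence_number, packet))

    def release(self, now, capture_time_of):
        """Pop every packet whose scheduled playout time has passed."""
        ready = []
        while self.heap and capture_time_of(self.heap[0][0]) + self.playout_delay <= now:
            ready.append(heapq.heappop(self.heap)[1])
        return ready

# Example: 30 fps capture clock, so packet n was captured at n / 30 seconds.
buffer = PlayoutBuffer()
for seq, data in [(2, b"f2"), (0, b"f0"), (1, b"f1")]:
    buffer.receive(seq, data)
print(buffer.release(now=0.30, capture_time_of=lambda seq: seq / 30.0))
```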

  15. Advanced Transport Operating System (ATOPS) control display unit software description

    Science.gov (United States)

    Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.

    1992-01-01

    The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project, on the Transport Systems Research Vehicle (TSRV) is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affects the aircraft's guidance, navigation, and display software.

  16. Network, system, and status software enhancements for the autonomously managed electrical power system breadboard. Volume 4: Graphical status display

    Science.gov (United States)

    Mckee, James W.

    1990-01-01

    This volume (4 of 4) contains the description, structured flow charts, prints of the graphical displays, and source code to generate the displays for the AMPS graphical status system. The function of these displays is to present to the manager of the AMPS system a graphical status display with hot boxes that allow the manager to get more detailed status on selected portions of the AMPS system. The development of the graphical displays is divided into two processes: the creation of the screen images and their storage in files on the computer, and the running of the status program which uses the screen images.

  17. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753

  18. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  19. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.
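
    The per-frame tagging idea can be illustrated with a small sketch that bundles each frame index and timestamp with the sensor readings sampled at capture time and serializes the result for upload. The field names and the stand-in sensor reader below are hypothetical, not the authors' schema.

```python
import json

def tag_frame(frame_index, timestamp, sensors):
    """Bundle one frame index with the sensor readings sampled at capture time."""
    return {"frame": frame_index, "timestamp": timestamp, "sensors": sensors}

def record_tagged_clip(read_sensors, num_frames=90, fps=30):
    """Collect one tag per captured frame and serialize them for upload."""
    tags = [tag_frame(i, i / fps, read_sensors()) for i in range(num_frames)]
    return json.dumps(tags)

# Stand-in sensor reader; a real client would query the phone's sensor APIs.
fake_sensors = lambda: {"lat": 28.12, "lon": -15.43, "temp_c": 24.5}
payload = record_tagged_clip(fake_sensors)
print(len(json.loads(payload)), "tagged frames ready for upload to the video server")
```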

  20. Distributed coding/decoding complexity in video sensor networks.

    Science.gov (United States)

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  1. An Evaluation of Detect and Avoid (DAA) Displays for Unmanned Aircraft Systems: The Effect of Information Level and Display Location on Pilot Performance

    Science.gov (United States)

    Fern, Lisa; Rorie, R. Conrad; Pack, Jessica S.; Shively, R. Jay; Draper, Mark H.

    2015-01-01

    A consortium of government, industry and academia is currently working to establish minimum operational performance standards for Detect and Avoid (DAA) and Control and Communications (C2) systems in order to enable broader integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). One subset of these performance standards will need to address the DAA display requirements that support an acceptable level of pilot performance. From a pilot's perspective, the DAA task is the maintenance of self separation and collision avoidance from other aircraft, utilizing the available information and controls within the Ground Control Station (GCS), including the DAA display. The pilot-in-the-loop DAA task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear from other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two levels of information level (basic, advanced) were compared across two levels of display location (standalone, integrated), for a total of four displays. The authors propose an eight-stage pilot-DAA interaction timeline from which several pilot response time metrics can be extracted. These metrics were compared across the four display conditions. The results indicate that the advanced displays had faster overall response times compared to the basic displays, however, there were no significant differences between the standalone and integrated displays. Implications of the findings on understanding pilot performance on the DAA task, the

  2. System for recording and displaying two-phase flow topographies

    International Nuclear Information System (INIS)

    Cary, C.N.; Block, J.A.

    1979-01-01

    A system of hardware and software has been developed and used to record and display in various forms details of the countercurrent flow topographies occurring in a scaled Pressurized Water Reactor downcomer annulus. An array of 288 conductivity sensors was mounted in a 1/15 scale PWR annulus. At each moment in time, the state of each probe indicates the presence or absence of water in its immediate vicinity. An electronic data acquisition system records the states of all probes 108 times per second on magnetic tape; software routines retrieve the data and reconstruct visual analogs of the flow topographies. The two-phase state of the annulus at each instant can be displayed on a hard copy plotter or on a CRT screen. By synchronizing a camera drive with the CRT display, 16mm films have been made recreating the flow process at full speed and at various slow motion rates. All data obtained are stored in computer files in numerical form and can be subjected to various types of quantitative analysis to assist in advanced code development and verification
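
    A minimal sketch of the reconstruction step, assuming (purely for illustration) that the 288 probes are laid out as an 18 x 16 grid: each recorded sample is reshaped and printed as a wet/dry map of the annulus.

```python
import numpy as np

ROWS, COLS = 18, 16            # 18 * 16 = 288 probes (illustrative layout only)

def render_sample(sample_bits):
    """sample_bits: iterable of 288 values, 1 = water present, 0 = vapour."""
    grid = np.asarray(sample_bits, dtype=int).reshape(ROWS, COLS)
    for row in grid:
        print("".join("#" if wet else "." for wet in row))

# One synthetic sample: water occupying the lower half of the annulus.
sample = [1 if i >= 144 else 0 for i in range(288)]
render_sample(sample)
```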

  3. The application of autostereoscopic display in smart home system based on mobile devices

    Science.gov (United States)

    Zhang, Yongjun; Ling, Zhi

    2015-03-01

    Smart home systems, which control home devices, are becoming more and more popular in our daily life. Mobile intelligent terminals based on smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. On the other hand, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as its control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU) and implement it with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system applies an autostereoscopic display to the control interface, which enhances the realism, user-friendliness, and visual comfort of the interface.
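
    As a rough illustration of multiview synthesis for an autostereoscopic panel (not the paper's OpenGL ES shader), the sketch below interleaves pixel columns cyclically from N rendered views; real lenticular panels use a slanted sub-pixel mapping, so this is only a simplified stand-in.

```python
import numpy as np

def interleave_views(views):
    """views: list of N images with identical shape (H, W, 3); returns the composite."""
    n = len(views)
    output = np.empty_like(views[0])
    for column in range(views[0].shape[1]):
        # Each display column shows the corresponding column of one of the N views.
        output[:, column] = views[column % n][:, column]
    return output

# Example with 4 synthetic 8 x 8 RGB views of increasing brightness.
views = [np.full((8, 8, 3), fill_value=v * 60, dtype=np.uint8) for v in range(4)]
composite = interleave_views(views)
print(composite.shape)
```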

  4. Hardware and software improvements to a low-cost horizontal parallax holographic video monitor.

    Science.gov (United States)

    Henrie, Andrew; Codling, Jesse R; Gneiting, Scott; Christensen, Justin B; Awerkamp, Parker; Burdette, Mark J; Smalley, Daniel E

    2018-01-01

    Displays capable of true holographic video have been prohibitively expensive and difficult to build. With this paper, we present a suite of modularized hardware components and software tools needed to build a HoloMonitor with basic "hacker-space" equipment, highlighting improvements that have enabled the total materials cost to fall to $820, well below that of other holographic displays. It is our hope that the current level of simplicity, development, design flexibility, and documentation will enable the lay engineer, programmer, and scientist to relatively easily replicate, modify, and build upon our designs, bringing true holographic video to the masses.

  5. Speed Biases With Real-Life Video Clips

    Directory of Open Access Journals (Sweden)

    Federica Rossi

    2018-03-01

    Full Text Available We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may complement traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing.
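
    A staircase procedure of the general kind mentioned above adjusts playback speed up or down according to the observer's response and estimates the subjectively "natural" speed from the reversal points. The sketch below is a simple 1-up/1-down illustration with assumed step sizes and stopping rule, not the study's exact double-staircase design.

```python
def run_staircase(respond_too_fast, start_speed=1.3, step=0.05, reversals_needed=8):
    """respond_too_fast(speed) -> True if the observer judges playback too fast."""
    speed = start_speed
    last_direction = None
    reversal_speeds = []
    while len(reversal_speeds) < reversals_needed:
        direction = -1 if respond_too_fast(speed) else +1
        if last_direction is not None and direction != last_direction:
            reversal_speeds.append(speed)          # record each turning point
        speed += direction * step
        last_direction = direction
    # The point of subjective equality is estimated from the reversal speeds.
    return sum(reversal_speeds) / len(reversal_speeds)

# Simulated observer whose subjectively "natural" speed is 1.1x real time.
print(round(run_staircase(lambda s: s > 1.1), 3))
```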

  6. Change Blindness Phenomena for Virtual Reality Display Systems.

    Science.gov (United States)

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., a passive and an active stereoscopic projection system, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  7. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display-scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.

  8. Web data display system based on data segment technology of MDSplus

    International Nuclear Information System (INIS)

    Liu Rui; Zhang Ming; Wen Chuqiao; Zheng Wei; Zhuang Ge; Yu Kexun

    2014-01-01

    Long-pulse operation is the main characteristic of advanced tokamaks, so the technologies of data storage and human-data interaction are vital for dealing with the large amounts of data generated in long-pulse experiments. A Web data display system was designed. The system is based on the ASP.NET architecture; it reads segmented-record data from the MDSplus database using the data segment technology and displays the data on Web pages using the NI Measurement Studio control library. With the data segment technology, long-pulse data can be divided into many small units, called data segments. Users can read particular data segments of the long-pulse data according to their needs. The system also implements an efficient strategy for reading segmented-record data, showing the waveforms required by users accurately and quickly. The data display Web system was tested on the J-TEXT tokamak and proved reliable and efficient, achieving the initial design goal. (authors)
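
    For readers unfamiliar with MDSplus segments, the sketch below shows how a single segment of a long-pulse signal can be read with the MDSplus Python bindings. This is only an illustration of the segment concept: the paper's system is ASP.NET-based, and the tree name, shot number and node path used here are placeholders.

```python
# Minimal sketch of reading one data segment from an MDSplus segmented node.
from MDSplus import Tree

def read_segment(tree_name, shot, node_path, seg_index):
    tree = Tree(tree_name, shot)            # open the experiment tree for one shot
    node = tree.getNode(node_path)          # the segmented signal node
    n = node.getNumSegments()               # how many small units the long pulse holds
    if seg_index >= n:
        raise IndexError(f"only {n} segments available")
    seg = node.getSegment(seg_index)        # fetch just the requested segment
    return seg.dim_of().data(), seg.data()  # (time axis, samples)

# Placeholder usage -- requires a reachable MDSplus tree:
# times, values = read_segment('jtext', 1234, r'\IP', 0)
```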

  9. Video control system for a drilling in furniture workpiece

    Science.gov (United States)

    Khmelev, V. L.; Satarov, R. N.; Zavyalova, K. V.

    2018-05-01

    Over the last five years, Russian industry has been undergoing robotization, and scientific groups have therefore been given new tasks. One of these new tasks is machine vision systems, which should solve the problem of automatic quality control. Systems of this type cost several thousand dollars each, a price that is out of reach for regional small businesses. In this article, we describe the principle and algorithm of a cheap video control system that uses web cameras and a notebook or desktop computer as the computing unit.
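
    As an illustration of how little code such a system can require, the OpenCV sketch below grabs one frame from a web camera and counts dark circular blobs that could correspond to drilled holes; the blur and Hough parameters are illustrative guesses, not the authors' algorithm.

```python
import cv2

cap = cv2.VideoCapture(0)            # first attached web camera
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)   # suppress sensor noise before circle detection
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 20,
                               param1=100, param2=30, minRadius=3, maxRadius=30)
    n_holes = 0 if circles is None else circles.shape[1]
    print(f"detected {n_holes} candidate holes")
```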

  10. Producing EGS4 shower displays with the Unified Graphics System

    International Nuclear Information System (INIS)

    Cowan, R.F.

    1990-01-01

    The EGS4 Code System has been coupled with the SLAC Unified Graphics System in such a manner as to provide a means for displaying showers on UGS77-supported devices. This is most easily accomplished by attaching an auxiliary subprogram package (SHOWGRAF) to existing EGS4 User Codes and making use of a graphics display or a post-processor code called EGS4PL. SHOWGRAF may be used to create shower displays directly on interactive IBM 5080 color display devices, supporting three-dimensional rotations, translations, and zoom features, and providing illustration of particle types and energies by color and/or intensity. Alternatively, SHOWGRAF may be used to record a two-dimensional projection of the shower in a device-independent graphics file. The EGS4PL post-processor may then be used to convert this file into device-dependent graphics code for any UGS77-supported device. Options exist within EGS4PL that allow for two-dimensional translations and zoom, for creating line structure to indicate particle types and energies, and for optional display of particles by type. All of this is facilitated by means of the command processor EGS4PL EXEC together with new options (5080 and PDEV) with the standard EGS4IN EXEC routine for running EGS4 interactively under VM/SP. 6 refs

  11. Advanced alarm systems: Display and processing issues

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, J.M. [Brookhaven National Lab., Upton, NY (United States); Wachtel, J.; Perensky, J. [US Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Regulatory Research

    1995-05-01

    This paper describes a research program sponsored by the US Nuclear Regulatory Commission to address the human factors engineering (HFE) deficiencies associated with nuclear power plant alarm systems. The overall objective of the study is to develop HFE review guidance for alarm systems. In support of this objective, human performance issues needing additional research were identified. Among the important issues were alarm processing strategies and alarm display techniques. This paper will discuss these issues and briefly describe our current research plan to address them.

  12. The everyday lives of video game developers: Experimentally understanding underlying systems/structures

    Directory of Open Access Journals (Sweden)

    Casey O'Donnell

    2009-03-01

    Full Text Available This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.

  13. A Miniaturized Video System for Monitoring Drosophila Behavior

    Science.gov (United States)

    Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana

    2011-01-01

    populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
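
    The signal chain described above (per-frame mean luminance, band-pass filtered at 0.3-10 Hz, with threshold crossings counted as movement events) can be reproduced offline in software. In the sketch below the frame rate, the synthetic luminance trace and the event threshold are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                            # assumed video frame rate (Hz)
t = np.arange(0, 60, 1 / fs)
luminance = 0.5 + 0.05 * np.random.randn(t.size)     # stand-in for per-frame mean brightness

# 0.3-10 Hz band-pass, mirroring the analog circuit described above
b, a = butter(2, [0.3 / (fs / 2), 10 / (fs / 2)], btype="band")
activity = filtfilt(b, a, luminance)

# count upward threshold crossings as movement "events"
threshold = 3 * activity.std()
crossings = np.flatnonzero((activity[:-1] < threshold) & (activity[1:] >= threshold))
mean_iei = np.diff(crossings / fs).mean() if crossings.size > 1 else float("nan")
print(f"{crossings.size} events, mean inter-event interval {mean_iei:.2f} s")
```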

  14. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled : evaluation of video image vehicle detection system (VIVDS) products and software upgrades to : existing products based on a list of conditions that might be diffic...

  15. The establishment of Saccharomyces boulardii surface display system using a single expression vector.

    Science.gov (United States)

    Wang, Tiantian; Sun, Hui; Zhang, Jie; Liu, Qing; Wang, Longjiang; Chen, Peipei; Wang, Fangkun; Li, Hongmei; Xiao, Yihong; Zhao, Xiaomin

    2014-03-01

    In the present study, an a-agglutinin-based Saccharomyces boulardii surface display system was successfully established using a single expression vector. Based on the two-protein co-expression vector pSP-G1 built by Partow et al., an S. boulardii surface display vector, pSDSb, containing all the display elements was constructed. Display of heterologous proteins was confirmed by successfully displaying enhanced green fluorescent protein (EGFP) and the chicken Eimeria tenella Microneme-2 protein (EtMic2) on the S. boulardii cell surface. The DNA sequence of the AGA1 gene from S. boulardii (SbAGA1) was determined and used as the cell wall anchor partner. This is the first time heterologous proteins have been displayed on the cell surface of S. boulardii. Because S. boulardii is probiotic and eukaryotic, its surface display system would be very valuable, particularly in the development of live vaccines against various pathogenic organisms, especially eukaryotic pathogens such as protistan parasites. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Low-cost Tools for Aerial Video Geolocation and Air Traffic Analysis for Delay Reduction Using Google Earth

    Science.gov (United States)

    Zetterlind, V.; Pledgie, S.

    2009-12-01

    Low-cost, low-latency, robust geolocation and display of aerial video is a common need for a wide range of earth observing as well as emergency response and security applications. While hardware costs for aerial video collection systems, GPS, and inertial sensors have been decreasing, software costs for geolocation algorithms and reference imagery/DTED remain expensive and highly proprietary. As part of a Federal Small Business Innovative Research project, MosaicATM and EarthNC, Inc have developed a simple geolocation system based on the Google Earth API and Google's 'built-in' DTED and reference imagery libraries. This system geolocates aerial video based on platform and camera position, attitude, and field-of-view metadata using geometric photogrammetric principles of ray-intersection with DTED. Geolocated video can be directly rectified and viewed in the Google Earth API during processing. Work is underway to extend our geolocation code to NASA World Wind for additional flexibility and a fully open-source platform. In addition to our airborne remote sensing work, MosaicATM has developed the Surface Operations Data Analysis and Adaptation (SODAA) tool, funded by NASA Ames, which supports analysis of airport surface operations to optimize aircraft movements and reduce fuel burn and delays. As part of SODAA, MosaicATM and EarthNC, Inc have developed powerful tools to display national airspace data and time-animated 3D flight tracks in Google Earth for 4D analysis. The SODAA tool can convert raw format flight track data, FAA National Flight Data (NFD), and FAA 'Adaptation' airport surface data to a spatial database representation and then to Google Earth KML. The SODAA client provides users with a simple graphical interface through which to generate queries with a wide range of predefined and custom filters, plot results, and export for playback in Google Earth in conjunction with NFD and Adaptation overlays.
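
    The geometric idea behind the geolocation step can be shown with a much-simplified example that intersects the camera's look direction with flat terrain instead of DTED; the coordinate convention and all numbers below are illustrative assumptions, not part of the authors' system.

```python
import numpy as np

def ground_intersection(cam_pos, pitch_deg, heading_deg, ground_alt=0.0):
    """cam_pos = (east, north, altitude) in metres; pitch measured below the horizon."""
    pitch, heading = np.radians(pitch_deg), np.radians(heading_deg)
    direction = np.array([np.sin(heading) * np.cos(pitch),
                          np.cos(heading) * np.cos(pitch),
                          -np.sin(pitch)])            # unit look vector, pointing downwards
    t = (ground_alt - cam_pos[2]) / direction[2]      # distance along the ray to the plane
    return cam_pos + t * direction

# Camera 1000 m up, looking east and 30 degrees below the horizon
print(ground_intersection(np.array([0.0, 0.0, 1000.0]), pitch_deg=30, heading_deg=90))
```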

  17. Eye movement analysis of reading from computer displays, eReaders and printed books.

    Science.gov (United States)

    Zambarbieri, Daniela; Carniglia, Elena

    2012-09-01

    To compare eye movements during silent reading of three eBooks and a printed book. The three different eReading tools were a desktop PC, iPad tablet and Kindle eReader. Video-oculographic technology was used for recording eye movements. In the case of reading from the computer display the recordings were made by a video camera placed below the computer screen, whereas for reading from the iPad tablet, eReader and printed book the recording system was worn by the subject and had two cameras: one for recording the movement of the eyes and the other for recording the scene in front of the subject. Data analysis provided quantitative information in terms of number of fixations, their duration, and the direction of the movement, the latter to distinguish between fixations and regressions. Mean fixation duration was different only in reading from the computer display, and was similar for the Tablet, eReader and printed book. The percentage of regressions with respect to the total amount of fixations was comparable for eReading tools and the printed book. The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects' reading behaviour is similar to reading from a printed book. © 2012 The College of Optometrists.

  18. Development of an Adaptable Display and Diagnostic System for the Evaluation of Tropical Cyclone Forecasts

    Science.gov (United States)

    Kucera, P. A.; Burek, T.; Halley-Gotway, J.

    2015-12-01

    NCAR's Joint Numerical Testbed Program (JNTP) focuses on the evaluation of experimental forecasts of tropical cyclones (TCs) with the goal of developing new research tools and diagnostic evaluation methods that can be transitioned to operations. Recent activities include the development of new TC forecast verification methods and of an adaptable TC display and diagnostic system. The next-generation display and diagnostic system is being developed to support the evaluation needs of the U.S. National Hurricane Center (NHC) and the broader TC research community. The new hurricane display and diagnostic capabilities allow forecasters and research scientists to examine the performance of operational and experimental models in greater depth. The system is built upon modern and flexible technology, including platform-independent OpenLayers mapping tools. The forecast track and intensity, along with the associated observed track information, are stored in an efficient MySQL database. The system provides an easy-to-use interactive display and diagnostic tools to examine forecast tracks stratified by intensity. Consensus forecasts can be computed and displayed interactively. The system is designed to display information for both real-time and historical TCs. The display configurations are easily adaptable to meet end-user preferences. Ongoing enhancements include improved capabilities for stratification and evaluation of historical best tracks, development and implementation of additional methods to stratify and compute consensus hurricane track and intensity forecasts, and improved graphical display tools. The display is also being enhanced to incorporate gridded forecast, satellite, and sea surface temperature fields. The presentation will provide an overview of the display and diagnostic system development and a demonstration of the current capabilities.

  19. Displays in scintigraphy

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.E.; Pizer, S.M.

    1977-01-01

    Displays have several functions: to transmit images, to permit interaction, to quantitate features and to provide records. The main characteristics of displays used for image transmission are their resolution, dynamic range, signal-to-noise ratio and uniformity. Considerations of visual acuity suggest that the display element size should be much less than the data element size, and in current practice at least 256×256 for a gamma camera image. The dynamic range for image transmission should be such that at least 64 levels of grey (or equivalent) are displayed. Scanner displays are also considered, and in particular, the requirements of a whole-body camera are examined. A number of display systems and devices are presented including a 'new' heated object colour display system. Interaction with displays is considered, including background subtraction, contrast enhancement, position indication and region-of-interest generation. Such systems lead to methods of quantitation, which imply knowledge of the expected distributions. Methods for intercomparing displays are considered. Polaroid displays, which have for so long dominated the field, are in the process of being replaced by stored image displays, now that large cheap memories exist which give an equivalent image quality. The impact of this in nuclear medicine is yet to be seen, but a major effect will be to enable true quantitation. (author)

  20. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computers. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can include copyright data, access control information, etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
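
    For readers unfamiliar with the general idea, the sketch below shows one common family of robust watermarking methods: hiding one bit per frame by forcing an order relation between two mid-frequency DCT coefficients of the luminance plane. The coefficient positions and embedding strength are illustrative choices, and this is not the specific scheme proposed in the paper.

```python
import numpy as np
import cv2

def embed_bit(luma, bit, strength=10.0):
    dct = cv2.dct(np.float32(luma))
    a, b = dct[3, 4], dct[4, 3]
    mean = (a + b) / 2.0
    # force a fixed-sign gap between the two coefficients to encode the bit
    dct[3, 4] = mean + strength if bit else mean - strength
    dct[4, 3] = mean - strength if bit else mean + strength
    return np.uint8(np.clip(cv2.idct(dct), 0, 255))

def extract_bit(luma):
    dct = cv2.dct(np.float32(luma))
    return int(dct[3, 4] > dct[4, 3])

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in luminance plane
print(extract_bit(embed_bit(frame, 1)), extract_bit(embed_bit(frame, 0)))   # -> 1 0
```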

  1. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to existing video material, student-student and student-faculty interactions increased significantly across 24 sections program-wide.
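
    The indexing idea (mapping spoken keywords to the times at which they occur) can be illustrated with a toy inverted index built from a time-stamped transcript; the transcript below is invented for illustration and says nothing about the inVideo engine's internals.

```python
from collections import defaultdict

# (timestamp in seconds, recognized sentence) pairs, e.g. from a speech-to-text pass
transcript = [
    (12.5, "today we cover buffer overflow attacks"),
    (47.0, "a buffer overflow corrupts adjacent memory"),
    (93.2, "next lecture covers sql injection"),
]

index = defaultdict(list)
for timestamp, sentence in transcript:
    for word in sentence.lower().split():
        index[word].append(timestamp)

print(index["buffer"])   # -> [12.5, 47.0]
print(index["sql"])      # -> [93.2]
```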

  2. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveals that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems 'expose' relevant video-generated metadata events, such as triggered alerts, and also permit queries of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  3. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring: high-sensitivity video devices are commonly used for the study of the activity of meteor streams during the night. These provide useful data for the determination, for instance, of radiant, orbital and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are endowed with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage performed by the low-scan all-sky CCD systems operated by the SPMN and, besides, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. This effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to perform continuous recording of daylight fireball events, we set up new automated systems based on CCD video cameras. However, the development of these video stations raises several issues with respect to nocturnal systems that must be properly solved in order to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. Of course, fireball association is unequivocal only in those cases when two or more stations record the fireball and, consequently, the geocentric radiant is accurately determined. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y

  4. 77 FR 70970 - Accessible Emergency Information, and Apparatus Requirements for Emergency Information and Video...

    Science.gov (United States)

    2012-11-28

    ... and better access video programming.'' H.R. Rep. No. 111-563, 111th Cong., 2d Sess. at 19 (2010... distributors, providers, and owners of television video programming, as well as the manufacturers of devices that display such programming. DATES: Comments are due on or before December 18, 2012; reply comments...

  5. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    Science.gov (United States)

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

  6. Playing a first-person shooter video game induces neuroplastic change.

    Science.gov (United States)

    Wu, Sijing; Cheng, Cho Kin; Feng, Jing; D'Angelo, Lisa; Alain, Claude; Spence, Ian

    2012-06-01

    Playing a first-person shooter (FPS) video game alters the neural processes that support spatial selective attention. Our experiment establishes a causal relationship between playing an FPS game and neuroplastic change. Twenty-five participants completed an attentional visual field task while we measured ERPs before and after playing an FPS video game for a cumulative total of 10 hr. Early visual ERPs sensitive to bottom-up attentional processes were little affected by video game playing for only 10 hr. However, participants who played the FPS video game and also showed the greatest improvement on the attentional visual field task displayed increased amplitudes in the later visual ERPs. These potentials are thought to index top-down enhancement of spatial selective attention via increased inhibition of distractors. Individual variations in learning were observed, and these differences show that not all video game players benefit equally, either behaviorally or in terms of neural change.

  7. Recent progress in OLED and flexible displays and their potential for application to aerospace and military display systems

    Science.gov (United States)

    Sarma, Kalluri

    2015-05-01

    Organic light emitting diode (OLED) display technology has advanced significantly in recent years and is increasingly being adopted in consumer electronics products with premium performance, such as high-resolution smart phones, tablet PCs and TVs. Even flexible OLED displays are beginning to be commercialized in consumer electronic devices such as smart phones and smart watches. In addition to the advances in OLED emitters, successful development and adoption of OLED displays for premium-performance applications rely on advances in several enabling technologies, including TFT backplanes, pixel drive electronics, pixel patterning technologies, encapsulation technologies and system-level engineering. In this paper we will discuss the impact of the recent advances in LTPS and AOS TFTs; R, G, B and white OLED with color-filter pixel architectures; and encapsulation on the success of OLEDs in consumer electronic devices. We will then discuss the potential of these advances for addressing the requirements of OLED and flexible displays for military and avionics applications.

  8. Video personalization for usage environment

    Science.gov (United States)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  9. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
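
    The core fusion step (weighted averaging of aligned, differently exposed frames into a radiance estimate) can be sketched in a few lines; the triangular weighting function and the exposure times below are illustrative stand-ins, not the weighting or alignment pipeline used in the paper.

```python
import numpy as np

def fuse(frames, exposures):
    """Combine aligned LDR frames into a rough radiance map."""
    frames = [f.astype(np.float64) / 255.0 for f in frames]
    acc = np.zeros_like(frames[0])
    wsum = np.zeros_like(frames[0])
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)    # low weight near under/over-exposure
        acc += w * img / t                   # divide by exposure time -> radiance units
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

exposures = [1 / 120, 1 / 30, 1 / 8]                     # made-up alternating exposures (s)
frames = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in exposures]
radiance = fuse(frames, exposures)
print(radiance.shape, float(radiance.min()), float(radiance.max()))
```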

  10. AUTOMATIC FAST VIDEO OBJECT DETECTION AND TRACKING ON VIDEO SURVEILLANCE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Arunachalam

    2012-08-01

    Full Text Available This paper describes advanced techniques for object detection and tracking in video. Most visual surveillance systems start with motion detection. Motion detection methods attempt to locate connected regions of pixels that represent the moving objects within the scene; different approaches include frame-to-frame difference, background subtraction and motion analysis. Motion detection can be achieved by Principal Component Analysis (PCA), and objects can then be separated from the background using background subtraction. The detected object can be segmented. Segmentation consists of two schemes: one for spatial segmentation and the other for temporal segmentation. Tracking is then performed on the detected object in each frame. The pixel labelling problem can be alleviated by the MAP (Maximum a Posteriori) technique.
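
    A comparable motion-detection front end can be assembled directly from OpenCV primitives, as in the sketch below, which uses the built-in MOG2 background subtractor in place of the PCA step described above; the video file name and the area threshold are placeholders.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")               # placeholder input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # per-pixel foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    moving = [c for c in contours if cv2.contourArea(c) > 500]
    print(f"{len(moving)} moving object(s) in this frame")

cap.release()
```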

  11. Hybrid compression of video with graphics in DTV communication systems

    NARCIS (Netherlands)

    Schaar, van der M.; With, de P.H.N.

    2000-01-01

    Advanced broadcast manipulation of TV sequences and enhanced user interfaces for TV systems have resulted in an increased amount of pre- and post-editing of video sequences, where graphical information is inserted. However, in the current broadcasting chain, there are no provisions for enabling an

  12. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  13. Video game-based neuromuscular electrical stimulation system for calf muscle training: a case study.

    Science.gov (United States)

    Sayenko, D G; Masani, K; Milosevic, M; Robinson, M F; Vette, A H; McConville, K M V; Popovic, M R

    2011-03-01

    A video game-based training system was designed to integrate neuromuscular electrical stimulation (NMES) and visual feedback as a means to improve strength and endurance of the lower leg muscles, and to increase the range of motion (ROM) of the ankle joints. The system allowed the participants to perform isotonic concentric and isometric contractions in both the plantarflexors and dorsiflexors using NMES. In the proposed system, the contractions were performed against exterior resistance, and the angle of the ankle joints was used as the control input to the video game. To test the practicality of the proposed system, an individual with chronic complete spinal cord injury (SCI) participated in the study. The system provided a progressive overload for the trained muscles, which is a prerequisite for successful muscle training. The participant indicated that he enjoyed the video game-based training and that he would like to continue the treatment. The results show that the training resulted in a significant improvement of the strength and endurance of the paralyzed lower leg muscles, and in an increased ROM of the ankle joints. Video game-based training programs might be effective in motivating participants to train more frequently and adhere to otherwise tedious training protocols. It is expected that such training will not only improve the properties of their muscles but also decrease the severity and frequency of secondary complications that result from SCI. Copyright © 2010 IPEM. All rights reserved.

  14. Designing and researching of the virtual display system based on the prism elements

    Science.gov (United States)

    Vasilev, V. N.; Grimm, V. A.; Romanova, G. E.; Smirnov, S. A.; Bakholdin, A. V.; Grishina, N. Y.

    2014-05-01

    Problems in designing virtual display systems for augmented reality placed near the observer's eye (so-called head-worn displays) with light-guide prismatic elements are considered. An augmented reality system is a complex consisting of an image generator (most often a microdisplay with an illumination system, if the display is not self-luminous), an objective that forms the display image practically at infinity, and a combiner that splits the light so that an observer can see the information on the microdisplay and, at the same time, the surrounding environment as the background. This work deals with a system whose combiner is based on a composite structure of prism elements. Three cases of the prism combiner design are considered, and the results of modeling with optical design software are presented. In the model, the question of a large pupil zone is analyzed, and the discontinuous character (mosaic structure) of the angular field in the transmission of information from the microdisplay to the observer's eye through the prismatic structure is discussed.

  15. Three-dimensional modeler for animated images display system

    International Nuclear Information System (INIS)

    Boubekeur, Rania

    1987-01-01

    The mv3d software allows the modeling and display of three-dimensional objects in interpretive mode, with the possibility of real-time animation. This system is intended as a graphical extension of a FORTH interpreter (implemented by CEA/IRDI/D.LETI/DEIN) in order to control specific hardware (the 3.D card designed and implemented by DEIN) allowing the generation of three-dimensional objects. The object description is carried out with a specific graphical language integrated into the FORTH interpreter. Objects are modeled using elementary solids called basic forms (cube, cone, cylinder...) assembled with classical geometric transformations (rotation, translation and scaling). These basic forms are approximated by plane polygonal facets, further divided into triangles. The coordinates of the triangle vertices constitute the geometrical data. These are sent to the 3.D card for processing and display. The processing performed comprises geometrical transformations for display, hidden-surface elimination, shading and clipping. The mv3d software is not a complete modeler but a simple, modular and extensible tool, to which other specific functions may easily be added, such as robot motion, collisions... (author) [fr

  16. Evaluation of Distance Education System for Adult Education Using 4 Video Transmissions

    OpenAIRE

    渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一

    2004-01-01

    The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.

  17. Operation aid system upon occurrence of abnormality and display method therefor

    International Nuclear Information System (INIS)

    Kubota, Ryuji; Ueno, Takashi.

    1995-01-01

    The present invention provides an operation aid system, and a method of displaying its information, for use upon occurrence of an abnormality in a plant having a large number of systems and equipment. Namely, the contents of an operation manual for abnormal occurrences are displayed in the form of a flow chart divided into judging sections and operation sections depending on the symptoms of plant parameters. Discrimination numbers are assigned to each of a plurality of sets of judging sections and operation sections. With this arrangement, using various measured plant signals as input data, the discrimination numbers of the judging sections corresponding to the input data are stored. A flow chart of the judging sections and operation sections corresponding to the stored discrimination numbers is then displayed. Further, with reference to the discrimination numbers described above, the operation manual relevant to the judging and operation sections is displayed in text form, together with predetermined drawings of the relevant systems and trend graphs of the plant. As a result, an appropriate operation manual and relevant information are displayed simultaneously upon occurrence of a plant abnormality or an operator's erroneous operation. (I.S.)

  18. An overview of recent end-to-end wireless medical video telemedicine systems using 3G.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S; Schizas, C N; Spanias, A; Kyriacou, E

    2010-01-01

    Advances in video compression, network technologies, and computer technologies have contributed to the rapid growth of mobile health (m-health) systems and services. Wide deployment of such systems and services is expected in the near future, and it is foreseen that they will soon be incorporated into daily clinical practice. This study focuses on describing the basic components of an end-to-end wireless medical video telemedicine system, providing a brief overview of recent advances in the field, while also highlighting future trends in the design of diagnostically driven telemedicine systems.

  19. Volumetric three-dimensional display system with rasterization hardware

    Science.gov (United States)

    Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua

    2001-06-01

    An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
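
    As a quick consistency check of the figures quoted above (a back-of-the-envelope calculation, not taken from the paper itself):

$$\frac{4000\ \text{slices/s}}{20\ \text{volume updates/s}} = 200\ \text{slices per volume},\qquad 768 \times 768 \times 200 \approx 1.2 \times 10^{8}\ \text{voxels},$$

    which is indeed consistent with the stated "greater than 90 million voxels."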

  20. Diagnostic image quality of video-digitized chest images

    International Nuclear Information System (INIS)

    Winter, L.H.; Butler, R.B.; Becking, W.B.; Warnars, G.A.O.; Haar Romeny, B. ter; Ottes, F.P.; Valk, J.-P.J. de

    1989-01-01

    The diagnostic accuracy obtained with the Philips picture archiving and communications subsystem was investigated by means of an observer performance study using receiver operating characteristic (ROC) analysis. The image quality of conventional films and video-digitized images was compared. The scanner had a 1024 x 1024 x 8 bit memory. The digitized images were displayed on a 60 Hz interlaced display monitor with 1024 lines. Posteroanterior (PA) roentgenograms of a chest phantom with superimposed simulated interstitial pattern disease (IPD) were produced; there were 28 normal and 40 abnormal films. Normal films were produced by the chest phantom alone. Abnormal films were taken of the chest phantom with varying degrees of superimposed simulated interstitial disease. Simulated IPD was chosen for the observer performance study because its results are less likely to be influenced by perceptual capabilities. The conventional films and the video-digitized images were viewed by five experienced observers during four separate sessions. Conventional films were presented on a viewing box; the digital images were displayed on the monitor described above. The presence of simulated interstitial disease was indicated on a 5-point ROC certainty scale by each observer. We analyzed the differences between ROC curves derived from correlated data statistically. The mean time required to evaluate 68 digitized images is approximately four times the mean time needed to read the conventional films. The diagnostic quality of the video-digitized images was significantly lower (at the 5% level) than that of the conventional films (median area under the curve (AUC) of 0.71 and 0.94, respectively). (author). 25 refs.; 2 figs.; 4 tabs

  1. Three Dimensional Spherical Display Systems and McIDAS: Tools for Science, Education and Outreach

    Science.gov (United States)

    Kohrs, R.; Mooney, M. E.

    2010-12-01

    The Space Science and Engineering Center (SSEC) and Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin are now using a 3D spherical display system and their Man computer Interactive Data Access System (McIDAS)-X and McIDAS-V as outreach tools to demonstrate how scientists and forecasters utilize satellite imagery to monitor weather and climate. Our outreach program displays orbits and data coverage of geostationary and polar satellites and demonstrates how each is beneficial for the remote sensing of Earth. Global composites of visible, infrared and water vapor images illustrate how satellite instruments collect data from different bands of the electromagnetic spectrum to monitor global weather patterns 24 hours a day. Captivating animations on spherical display systems are proving to be much more intuitive than traditional 2D displays, enabling audiences to view satellites orbiting above real-time weather systems circulating the entire globe. Complementing the 3D spherical display system are the UNIX-based McIDAS-X and Java-based McIDAS-V software packages. McIDAS is used to composite the real-time global satellite data and create other weather-related derived products. Client and server techniques used by these software packages provide the opportunity to continually update the real-time content on our globe. The enhanced functionality of McIDAS-V extends our outreach program by allowing in-depth interactive 4-dimensional views of the imagery previously viewed on the 3D spherical display system. An important goal of our outreach program is the promotion of remote sensing research and technology at SSEC and CIMSS. The 3D spherical display system has quickly become a popular tool to convey the societal benefits of these endeavors. Audiences of all ages instinctively relate to recent weather events, which keeps them engaged in spherical display presentations. McIDAS facilitates further exploration of the science behind the weather

  2. Cupping for treating neck pain in video display terminal (VDT) users: a randomized controlled pilot trial.

    Science.gov (United States)

    Kim, Tae-Hun; Kang, Jung Won; Kim, Kun Hyung; Lee, Min Hee; Kim, Jung Eun; Kim, Joo-Hee; Lee, Seunghoon; Shin, Mi-Suk; Jung, So-Young; Kim, Ae-Ran; Park, Hyo-Ju; Hong, Kwon Eui

    2012-01-01

    This was a randomized controlled pilot trial to evaluate the effectiveness of cupping therapy for neck pain in video display terminal (VDT) workers. Forty VDT workers with moderate to severe neck pain were recruited from May, 2011 to February, 2012. Participants were randomly allocated into one of the two interventions: 6 sessions of wet and dry cupping or heating pad application. The participants were offered an exercise program to perform during the participation period. A 0 to 100 numeric rating scale (NRS) for neck pain, measure yourself medical outcome profile 2 score (MYMOP2 score), cervical spine range of motion (C-spine ROM), neck disability index (NDI), the EuroQol health index (EQ-5D), short form stress response inventory (SRI-SF) and fatigue severity scale (FSS) were assessed at several points during a 7-week period. Compared with a heating pad, cupping was more effective in improving pain (adjusted NRS difference: -1.29 [95% CI -1.61, -0.97] at 3 weeks (p=0.025) and -1.16 [-1.48, -0.84] at 7 weeks (p=0.005)), neck function (adjusted NDI difference: -0.79 [-1.11, -0.47] at 3 (p=0.0039) and 7 weeks (pcupping and 0.91 [0.86, 0.91] with heating pad treatment, p=0.0054). Four participants reported mild adverse events of cupping. Two weeks of cupping therapy and an exercise program may be effective in reducing pain and improving neck function in VDT workers.

  3. A new technique for presentation of scientific works: video in poster.

    Science.gov (United States)

    Bozdag, Ali Dogan

    2008-07-01

    Presentations at scientific congresses and symposiums can take two different forms: poster or oral presentation. Each method has some advantages and disadvantages. To combine the advantages of oral and poster presentations, a new presentation type was conceived: "video in poster." The top of a portable digital video disc (DVD) player is opened 180 degrees to keep the screen and the body of the DVD player in the same plane. The poster is attached to the DVD player, and a window is made in the poster to expose the screen of the DVD player so that the screen appears as a picture on the poster. This video in poster is then fixed to the panel. When the DVD player is turned on, the video presentation of the surgical procedure starts. Several posters were presented at different medical congresses in 2007 using the "video in poster" technique, and they received poster awards. The video in poster combines the advantages of both oral and poster presentations.

  4. Optimal use of video for teaching the practical implications of studying business information systems

    DEFF Research Database (Denmark)

    Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup

    that video should be introduced early during a course to prevent students' misconceptions about working with business information systems, as well as to increase motivation and comprehension within the academic area. It is also considered important to have a trustworthy person explaining the practical......The study of business information systems has become increasingly important in the Digital Economy. However, it has been found that students have difficulties understanding its practical implications, and this leads to a decrease in motivation. This study aims to investigate how to optimize... not sufficiently reflect the theoretical recommendations for using video optimally in management education. It did not comply with the video learning sequence introduced by Marx and Frost (1998). However, it questions whether the level of cognitive orientation activities can become too extensive. It finds

  5. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
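
    The final classification stage described above can be prototyped with an off-the-shelf one-class classifier once per-frame feature vectors are available; in the sketch below the feature vectors are random stand-ins, and the use of a one-class SVM is an illustrative choice rather than the classifier used in the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_features = rng.normal(0.0, 1.0, size=(500, 8))      # training set: normal crowd frames only
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_features)

test = np.vstack([rng.normal(0.0, 1.0, (5, 8)),            # normal-looking frames
                  rng.normal(4.0, 1.0, (5, 8))])           # outlying (abnormal) frames
print(model.predict(test))   # +1 = normal pattern, -1 = flagged as abnormal
```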

  6. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include, but are not limited to, core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  7. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, video indexing, etc. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad-hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotating interface which allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionality and are based on active network technology. We then show how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  8. Cognitive Cost of Using Augmented Reality Displays.

    Science.gov (United States)

    Baumeister, James; Ssin, Seung Youb; ElSayed, Neven A M; Dorrian, Jillian; Webb, David P; Walsh, James A; Simon, Timothy M; Irlitti, Andrew; Smith, Ross T; Kohler, Mark; Thomas, Bruce H

    2017-11-01

    This paper presents the results of two cognitive load studies comparing three augmented reality display technologies: spatial augmented reality, the optical see-through Microsoft HoloLens, and the video see-through Samsung Gear VR. In particular, the two experiments focused on isolating the cognitive load cost of receiving instructions for a button-pressing procedural task. The studies employed a self-assessment cognitive load methodology, as well as an additional dual-task cognitive load methodology. The results showed that spatial augmented reality led to increased performance and reduced cognitive load. Additionally, it was discovered that a limited field of view can introduce increased cognitive load requirements. The findings suggest that some of the inherent restrictions of head-mounted displays materialize as increased user cognitive load.

  9. Development of an emergency medical video multiplexing transport system. Aiming at the nation wide prehospital care on ambulance.

    Science.gov (United States)

    Nagatuma, Hideaki

    2003-04-01

    The Emergency Medical Video Multiplexing Transport System (EMTS) is designed to support prehospital care by delivering high-quality live video streams of patients in an ambulance to emergency doctors in a remote hospital via satellite communications. Its key feature is that EMTS divides a patient's live video scene into four pieces and transports the four video streams over four separate network channels. By multiplexing four video streams, EMTS is able to transport high-quality video through low-data-rate networks such as satellite communications and cellular phone networks. To transport live video streams continuously, EMTS adopts the Real-time Transport Protocol/Real-time Control Protocol as its network protocol, and video stream data are compressed in the Moving Picture Experts Group 4 format. Because EMTS recombines the four video streams by checking video frame numbers, it uses a refresh packet that initializes the server's frame numbers to keep the four video streams synchronized.
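
    The split-and-recombine idea described above can be sketched as follows. This is only an illustration of dividing a frame into four pieces and reassembling them by frame number; the MPEG-4 compression, RTP/RTCP transport and refresh-packet mechanism of the real EMTS are omitted.

    ```python
    # Minimal sketch of the "split into four pieces / recombine by frame number" idea.
    import numpy as np

    def split_quadrants(frame, frame_no):
        """Divide one frame into four quadrant packets, each tagged with the frame number."""
        h, w = frame.shape[:2]
        return [
            (frame_no, 0, frame[:h // 2, :w // 2]),
            (frame_no, 1, frame[:h // 2, w // 2:]),
            (frame_no, 2, frame[h // 2:, :w // 2]),
            (frame_no, 3, frame[h // 2:, w // 2:]),
        ]

    def recombine(packets):
        """Reassemble quadrants that carry the same frame number."""
        by_frame = {}
        for frame_no, quad, tile in packets:
            by_frame.setdefault(frame_no, {})[quad] = tile
        frames = {}
        for frame_no, quads in by_frame.items():
            if len(quads) == 4:                      # only complete frames are rebuilt
                top = np.hstack([quads[0], quads[1]])
                bottom = np.hstack([quads[2], quads[3]])
                frames[frame_no] = np.vstack([top, bottom])
        return frames
    ```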

  10. Learning neuroendoscopy with an exoscope system (video telescopic operating monitor): Early clinical results.

    Science.gov (United States)

    Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya

    2016-01-01

    A steep learning curve is found initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures, and its utility as a tool for minimally invasive neurosurgery and neuroendoscopy during the initial learning curve was studied. VITOM was used in 25 cranial and 14 spinal procedures. Image quality was comparable to that of the endoscope and microscope. Surgeons' comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors that were compensated for with repeated procedures. VITOM was found useful in reducing the initial learning curve of neuroendoscopy.

  11. Video Feedforward for Rapid Learning of a Picture-Based Communication System

    Science.gov (United States)

    Smith, Jemma; Hand, Linda; Dowrick, Peter W.

    2014-01-01

    This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…

  12. A Fluid Membrane-Based Soluble Ligand Display System for Live Cell Assays

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Jwa-Min; Nair, Pradeep N.; Neve, Richard M.; Gray, Joe W.; Groves, Jay T.

    2005-10-14

    Cell communication modulates numerous biological processes including proliferation, apoptosis, motility, invasion and differentiation. Correspondingly, there has been significant interest in the development of surface display strategies for the presentation of signaling molecules to living cells. This effort has primarily focused on naturally surface-bound ligands, such as extracellular matrix components and cell membranes. Soluble ligands (e.g. growth factors and cytokines) play an important role in intercellular communications, and their display in a surface-bound format would be of great utility in the design of array-based live cell assays. Recently, several cell microarray systems that display cDNA, RNAi, or small molecules in a surface array format were proven to be useful in accelerating high-throughput functional genetic studies and screening therapeutic agents. These surface display methods provide a flexible platform for the systematic, combinatorial investigation of genes and small molecules affecting cellular processes and phenotypes of interest. In an analogous sense, it would be an important advance if one could display soluble signaling ligands in a surface assay format that allows for systematic, patterned presentation of soluble ligands to live cells. Such a technique would make it possible to examine cellular phenotypes of interest in a parallel format with soluble signaling ligands as one of the display parameters. Herein we report a ligand-modified fluid supported lipid bilayer (SLB) assay system that can be used to functionally display soluble ligands to cells in situ (Figure 1A). By displaying soluble ligands on a SLB surface, both solution behavior (the ability to become locally enriched by reaction-diffusion processes) and solid behavior (the ability to control the spatial location of the ligands in an open system) could be combined. The method reported herein benefits from the naturally fluid state of the supported membrane, which allows

  13. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as the binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator and the distance to the target, the Probability of Assessment (PA) can be determined as a function of a variety of conditions or assumptions. The PA used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance and lists MOEs for video systems used in subjective applications such as alarm assessment.
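
    As a hedged illustration of using the probability of assessment (PA) as an MOE, the sketch below treats each assessment trial as a Bernoulli outcome and reports a binomial point estimate with a normal-approximation confidence interval; the trial counts are hypothetical and this is not Sandia's model.

    ```python
    # Illustrative sketch: PA estimated as a binomial proportion from assessment trials.
    import math

    def pa_estimate(successes, trials, z=1.96):
        """Point estimate and ~95% CI for probability of assessment."""
        p = successes / trials
        half_width = z * math.sqrt(p * (1 - p) / trials)
        return p, max(0.0, p - half_width), min(1.0, p + half_width)

    print(pa_estimate(87, 100))   # e.g. 87 correct assessments out of 100 hypothetical trials
    ```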

  14. The optical design of ultra-short throw system for panel emitted theater video system

    Science.gov (United States)

    Huang, Jiun-Woei

    2015-07-01

    In the past decade, the display format evolved from HD (High Definition) through Full HD (1920x1080) to UHD (4Kx2K), pushing the display industry in two main directions: liquid crystal displays (LCD) from 10 inches to 100 inches and beyond, and projectors. Although LCDs are widely used in the market, their production requires greater capital investment and raises concerns of environmental pollution and protection [1]. Projection systems may therefore be considered, given their wider viewing access, flexible placement, energy saving, and environmental benefits. The goal of this work is to design and fabricate a short-throw liquid crystal on silicon (LCoS) projection system for cinema. It provides a projection lens system, including a telecentric lens fitted to the emissive LCoS panel to collimate light and enlarge the field angle. The optical path is then guided by a symmetric lens. Light from the LCoS passes through the lens and reflects off an aspherical mirror to form a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.

  15. 76 FR 55585 - Video Description: Implementation of the Twenty-First Century Communications and Video...

    Science.gov (United States)

    2011-09-08

    ... of Video Programming Report and Order (15 F.C.C.R. 15,230 (2000)), recon. granted in part and denied... dialogue, makes video programming more accessible to individuals who are blind or visually impaired. The... networks, and multichannel video programming distributor systems (``MVPDs'') with more than 50,000...

  16. Adoption Concerns for the Deployment of Interactive Public Displays at Schools

    Directory of Open Access Journals (Sweden)

    José Alberto Lencastre

    2014-12-01

    Full Text Available JuxtaLearn is a research project focused on ‘performance’ as a means of provoking students’ understanding of science and technology through the creation and sharing of educational videos. As the videos will be shared on public displays, the Portuguese research team developed three workshops with twelve teachers from a Portuguese secondary school representing different school departments and sharing organizational responsibilities. The aim was to generate scenarios of possible features and interactions for the curricular integration of the technological device. Our findings suggest that teachers are not motivated to use technologies in the classroom on their own, but are receptive to new and challenging technologies when properly stimulated. They were able to generate scenarios that take advantage of the possibilities offered by digital public displays to stimulate learning processes. However, there are pedagogical, organizational and ethical concerns in the management and control of content that need to be resolved before teachers feel comfortable dealing with change and technological innovation.

  17. A digital video tracking system

    Science.gov (United States)

    Giles, M. K.

    1980-01-01

    The Real-Time Videotheodolite (RTV) was developed in connection with the requirement to replace film as a recording medium to obtain the real-time location of an object in the field-of-view (FOV) of a long focal length theodolite. Design philosophy called for a system capable of discriminatory judgment in identifying the object to be tracked with 60 independent observations per second, capable of locating the center of mass of the object projection on the image plane within about 2% of the FOV in rapidly changing background/foreground situations, and able to generate a predicted observation angle for the next observation. A description is given of a number of subsystems of the RTV, taking into account the processor configuration, the video processor, the projection processor, the tracker processor, the control processor, and the optics interface and imaging subsystem.

  18. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced Wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras acquiring high resolution video images up to 4Mpixels@60 fps or high frame rate video images up to about 1000 fps@512x512pixels.

  19. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  20. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    Science.gov (United States)

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ selective encryption to encrypt only the important and sensitive parts of the video information, aiming to ensure real-time performance and encryption efficiency. Classic block ciphers are not applicable to video encryption because of their high computational overhead. In this paper, we propose an encryption selection control module that encrypts video syntax elements dynamically under the control of a chaotic pseudorandom sequence. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
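
    A toy sketch of chaos-driven selective encryption is shown below. A plain logistic map stands in for the paper's spatiotemporal chaos system, and a list of byte offsets stands in for the selected H.264 syntax elements; neither reflects the authors' exact construction.

    ```python
    # Toy sketch of selective encryption driven by a chaotic, binarized keystream.
    def keystream(seed, length, r=3.99):
        """Binarize logistic-map iterates into one byte per step."""
        x, out = seed, []
        for _ in range(length):
            byte = 0
            for _ in range(8):
                x = r * x * (1 - x)
                byte = (byte << 1) | (1 if x > 0.5 else 0)   # threshold binarization
            out.append(byte)
        return out

    def selective_encrypt(data, selected_offsets, seed=0.6180339887):
        """XOR only the selected byte positions with the chaotic keystream."""
        ks = keystream(seed, len(selected_offsets))
        buf = bytearray(data)
        for k, off in zip(ks, selected_offsets):
            buf[off] ^= k
        return bytes(buf)

    # Decryption repeats the same call with the same seed and offsets (XOR is symmetric).
    ```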

  1. The impact of video technology on learning: A cooking skills experiment.

    Science.gov (United States)

    Surgenor, Dawn; Hollywood, Lynsey; Furey, Sinéad; Lavelle, Fiona; McGowan, Laura; Spence, Michelle; Raats, Monique; McCloat, Amanda; Mooney, Elaine; Caraher, Martin; Dean, Moira

    2017-07-01

    This study examines the role of video technology in the development of cooking skills. The study explored the views of 141 female participants on whether video technology can promote confidence in learning new cooking skills to assist in meal preparation. Prior to each focus group participants took part in a cooking experiment to assess the most effective method of learning for low-skilled cooks across four experimental conditions (recipe card only; recipe card plus video demonstration; recipe card plus video demonstration conducted in segmented stages; and recipe card plus video demonstration whereby participants freely accessed video demonstrations as and when needed). Focus group findings revealed that video technology was perceived to assist learning in the cooking process in the following ways: (1) improved comprehension of the cooking process; (2) real-time reassurance in the cooking process; (3) assisting the acquisition of new cooking skills; and (4) enhancing the enjoyment of the cooking process. These findings display the potential for video technology to promote motivation and confidence as well as enhancing cooking skills among low-skilled individuals wishing to cook from scratch using fresh ingredients.

  2. Identification and analysis of unsatisfactory psychosocial work situations: a participatory approach employing video-computer interaction.

    Science.gov (United States)

    Hanse, J J; Forsman, M

    2001-02-01

    A method for psychosocial evaluation of potentially stressful or unsatisfactory situations in manual work was developed. It focuses on subjective responses regarding specific situations and is based on interactive worker assessment when viewing video recordings of oneself. The worker is first video-recorded during work. The video is then displayed on the computer terminal, and the filmed worker clicks on virtual controls on the screen whenever an unsatisfactory psychosocial situation appears; a window of questions regarding psychological demands, mental strain and job control is then opened. A library with pictorial information and comments on the selected situations is formed in the computer. The evaluation system, called PSIDAR, was applied in two case studies, one of manual materials handling in an automotive workshop and one of a group of workers producing and testing instrument panels. The findings indicate that PSIDAR can provide data that are useful in a participatory ergonomic process of change.

  3. Measurement techniques of LC display systems

    Science.gov (United States)

    Kosmowski, Bogdan B.; Becker, Michael E.; Neumeier, Juergen

    1993-10-01

    The strong increase in applications of liquid crystal displays in various areas (measuring and medical equipment, automotive, telecommunication, office, etc.) has driven the demand for adequate specification of LCD performance. The optical, electro-optical and spectral properties of LCDs are strongly dependent on viewing direction, electrical driving conditions, illumination and temperature. All these quantities have to be precisely controlled; when one of them is varied, the resulting optical response of the object is recorded. In this paper we present measuring methods proposed for LCD panels and the computer-controlled measuring system (DMS) for their evaluation.

  4. Design and Implementation of Mobile Car with Wireless Video Monitoring System Based on STC89C52

    Directory of Open Access Journals (Sweden)

    Yang Hong

    2014-05-01

    Full Text Available With the rapid development of wireless networks and image acquisition technology, wireless video transmission technology has been widely applied in various communication systems. Traditional video monitoring technology is restricted by conditions such as layout, environment, relatively large equipment volume, and cost. In view of this problem, this paper proposes a mobile car equipped with a wireless video monitoring system. The mobile car, which provides detection, video acquisition, and wireless data transmission, is developed based on an STC89C52 Micro Control Unit (MCU) and a WiFi router. First, information such as images, temperature and humidity is processed by the MCU, passed to the router, and returned by the WiFi router to the host computer phone. Second, control information issued by the host computer phone is received by the WiFi router and sent to the MCU, which then issues the relevant instructions. Finally, the wireless transmission of video images and the remote control of the car are realized. The results show that the system offers simple operation, high stability, fast response, low cost, strong flexibility, and wide applicability. The system has practical and popularization value.

  5. Projection displays and MEMS: timely convergence for a bright future

    Science.gov (United States)

    Hornbeck, Larry J.

    1995-09-01

    Projection displays and microelectromechanical systems (MEMS) have evolved independently, occasionally crossing paths as early as the 1950s. But the commercially viable use of MEMS for projection displays remained elusive until the recent invention of Texas Instruments' Digital Light Processing (DLP) technology. DLP technology is based on the Digital Micromirror Device (DMD) microchip, a MEMS semiconductor digital light switch that precisely controls a light source for projection display and hardcopy applications. DLP technology provides a unique business opportunity because of the timely convergence of market needs and technology advances. The world is rapidly moving to an all-digital communications and entertainment infrastructure. In the near future, most of the technologies necessary for this infrastructure will be available at the right performance and price levels. This will make commercially viable an all-digital chain (capture, compression, transmission, reception, decompression, hearing, and viewing). Unfortunately, the digital images received today must be translated into analog signals for viewing on today's televisions. Digital video is the final link in the all-digital infrastructure, and DLP technology provides that link. DLP technology is an enabler for digital, high-resolution, color projection displays that have high contrast, are bright and seamless, and have the accuracy of color and grayscale that can be achieved only by digital control. This paper contains an introduction to DMD and DLP technology, including the historical context from which to view their development. The architecture, projection operation, and fabrication are presented. Finally, the paper includes an update on current DMD business opportunities in projection displays and hardcopy.

  6. A low-cost multimodal head-mounted display system for neuroendoscopic surgery.

    Science.gov (United States)

    Xu, Xinghua; Zheng, Yi; Yao, Shujing; Sun, Guochen; Xu, Bainan; Chen, Xiaolei

    2018-01-01

    With rapid advances in technology, wearable devices such as head-mounted displays (HMD) have been adopted for various uses in medical science, ranging from simply aiding fitness to assisting surgery. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between the endoscopic image, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a leap motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance to better understand the positional relation between lesions and normal tissues at will. A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were accomplished successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate. The screen resolution of the HMD was high enough for the neurosurgeon to operate carefully. With the system, the neurosurgeon might gain a better comprehension of lesions by freely switching among images of different modalities. The system had a steep learning curve, which meant a quick increment of skill with it. Compared with commercially available surgical assistant instruments, this system was relatively low-cost. The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.

  7. Design of the control system for full-color LED display based on MSP430 MCU

    Science.gov (United States)

    Li, Xue; Xu, Hui-juan; Qin, Ling-ling; Zheng, Long-jiang

    2013-08-01

    The LED display integrates microelectronics, computer technology and information processing, and has become one of the most prominent display media of the new generation thanks to its bright colors, high dynamic range, high brightness and long operating life. LED displays have been widely used in banks, securities trading, highway signs, airports, advertising, and so on. According to the display color, LED screens are divided into monochrome, dual-color and full-color displays. With the diversification of LED display colors and the continual rise in display requirements, LED drive circuits and control technology have progressed correspondingly. The earliest monochrome screens displayed only Chinese characters, simple characters or digits, so the requirements on the controller were relatively low. With the wide use of dual-color LED displays, the performance demands on the controller increased. In recent years, full-color LED displays, built from the three primary colors red, green and blue with grayscale rendering, have attracted great attention for their rich and colorful display effects. Every true-color pixel consists of three sub-pixels (red, green, blue), and spatial color mixing is used to realize multicolor display. A dynamic scanning control system for a full-color LED display is designed based on low-power MSP430 microcontroller technology. The grayscale control of the system uses a pulse width modulation (PWM) based method; under the condition of 256-level grayscale display, this method improves the efficiency of the LED light devices and enhances the tonal depth of the image. The drive circuit uses a 1/8-scanning constant-current drive mode and makes full use of the microcontroller's I/O resources to complete the control. The system supports text and picture display with 256 grayscale levels
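
    The 256-level grayscale control described above can be illustrated with a generic bit-plane (binary-coded modulation) schedule, a common PWM-style technique; the MSP430 driver code and the 1/8 scan multiplexing are not reproduced, and the function below is only an assumption-laden sketch.

    ```python
    # Illustrative sketch: 256 grayscale levels via bit-plane timing (binary-coded modulation).
    def bit_plane_schedule(gray_values, base_ticks=1):
        """For an 8-bit grayscale row, return (bit_plane, on_mask, duration) tuples.

        Bit plane k is shown for base_ticks * 2**k ticks; a pixel is lit during that
        plane iff bit k of its grayscale value is set, so total on-time is
        proportional to the 0..255 value.
        """
        schedule = []
        for k in range(8):
            on_mask = [(v >> k) & 1 for v in gray_values]
            schedule.append((k, on_mask, base_ticks * (1 << k)))
        return schedule

    # usage: bit_plane_schedule([0, 128, 255]) -> plane 7 lights pixels 1 and 2 for 128 ticks, etc.
    ```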

  8. Commercially available video motion detectors

    International Nuclear Information System (INIS)

    1979-01-01

    A market survey of commercially available video motion detection systems was conducted by the Intrusion Detection Systems Technology Division of Sandia Laboratories. The information obtained from this survey is summarized in this report. The cutoff date for this information is May 1978. A list of commercially available video motion detection systems is appended

  9. Video Surveillance: Privacy Issues and Legal Compliance

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2015-01-01

    Pervasive usage of video surveillance is rapidly increasing in developed countries. Continuous security threats to public safety demand use of such systems. Contemporary video surveillance systems offer advanced functionalities which threaten the privacy of those recorded in the video. There is a...

  10. Interface of the transport systems research vehicle monochrome display system to the digital autonomous terminal access communication data bus

    Science.gov (United States)

    Easley, W. C.; Tanguy, J. S.

    1986-01-01

    An upgrade of the transport systems research vehicle (TSRV) experimental flight system retained the original monochrome display system. The original host computer was replaced with a Norden 11/70, and a new digital autonomous terminal access communication (DATAC) data bus was installed for data transfer between the display system and the host, which required a new data interface method. The new display data interface uses four split phase bipolar (SPBP) serial busses. The DATAC bus uses a shared interface RAM (SIR) for intermediate storage of its data transfers. A display interface unit (DIU) was designed and configured to read from and write to the SIR to convert the data from parallel to SPBP serial and vice versa. Separation of data for use by each SPBP bus and synchronization of data transfer throughout the entire experimental flight system were found to be the major problems requiring solution in the DIU design. The techniques used to accomplish these new data interface requirements are described.

  11. A laboratory evaluation of color video monitors

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video monitors used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color video technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories established a program to evaluate the newest relevant color video equipment. This report documents the evaluation of an integral component, color monitors. It briefly discusses a critical parameter, dynamic range, details test procedures, and evaluates the results.

  12. A laboratory evaluation of color video monitors

    International Nuclear Information System (INIS)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video monitors used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color video technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories established a program to evaluate the newest relevant color video equipment. This report documents the evaluation of an integral component, color monitors. It briefly discusses a critical parameter, dynamic range, details test procedures, and evaluates the results

  13. Video-tracker trajectory analysis: who meets whom, when and where

    Science.gov (United States)

    Jäger, U.; Willersinn, D.

    2010-04-01

    Unveiling unusual or hostile events by observing manifold moving persons in a crowd is a challenging task for human operators, especially when sitting in front of monitor walls for hours. Typically, hostile events are rare, so due to tiredness and negligence the operator may miss important events. In such situations, an automatic alarming system is able to support the human operator. The system incorporates a processing chain consisting of (1) people tracking, (2) event detection, (3) data retrieval, and (4) display of the relevant video sequence overlaid by highlighted regions of interest. In this paper we focus on the event detection stage of the processing chain mentioned above. In our case, the selected event of interest is the encounter of people. Although based on a rather simple trajectory analysis, this kind of event is of great practical importance because it paves the way to answering the question "who meets whom, when and where". This, in turn, forms the basis for detecting potential situations where, e.g., money, weapons or drugs are handed over from one person to another in crowded environments like railway stations, airports or busy streets and squares. The input to the trajectory analysis comes from a multi-object video-based tracking system developed at IOSB which is able to track multiple individuals within a crowd in real time [1]. From this we calculate the inter-distances between all persons on a frame-to-frame basis. We use a sequence of simple rules based on the individuals' kinematics to detect the event mentioned above and to output the frame number, the persons' IDs from the tracker and the pixel coordinates of the meeting position. Using this information, a data retrieval system may extract the corresponding part of the recorded video image sequence, which finally allows for replaying the selected video clip with a highlighted region of interest to attract the operator's attention for further visual inspection.
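
    The frame-to-frame inter-distance rule can be sketched as follows; the distance threshold, the persistence requirement and the track data structure are illustrative assumptions rather than the IOSB system's actual parameters.

    ```python
    # Minimal sketch of "who meets whom, when and where": flag a meeting when two tracked
    # persons stay within a distance threshold for a minimum number of consecutive frames.
    # Tracks are assumed to be dicts frame -> (x, y) per person ID.
    import math

    def detect_meetings(tracks, max_dist=50.0, min_frames=25):
        """Return (id_a, id_b, first_frame, position) for each detected encounter."""
        meetings, streaks = [], {}
        ids = sorted(tracks)
        frames = sorted(set().union(*[set(t) for t in tracks.values()]))
        for f in frames:
            for i, a in enumerate(ids):
                for b in ids[i + 1:]:
                    if f in tracks[a] and f in tracks[b]:
                        (xa, ya), (xb, yb) = tracks[a][f], tracks[b][f]
                        close = math.hypot(xa - xb, ya - yb) <= max_dist
                    else:
                        close = False
                    streaks[(a, b)] = streaks.get((a, b), 0) + 1 if close else 0
                    if streaks[(a, b)] == min_frames:      # report once per sustained encounter
                        meetings.append((a, b, f - min_frames + 1, tracks[a][f]))
        return meetings
    ```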

  14. Development of Information Display System for Operator Support in Severe Accident

    International Nuclear Information System (INIS)

    Jeong, Kwang Il; Lee, Joon Ku

    2016-01-01

    When a severe accident occurs, the technical support center (TSC) performs the mitigation strategy with severe accident management guidelines (SAMG) and communicates with main control room (MCR) operators to obtain information on the plant's status. In such circumstances, the importance of an information display for severe accidents increases. Therefore, an information display system dedicated to severe accident conditions is required to secure the plant information, to provide the necessary information to MCR operators and TSC operators, and to support decisions using this information. We set up the design concept of the severe accident information display system (SIDS) in a previous study and defined its functional and performance requirements. This paper describes the process and results of identifying the severe accident information for MCR operators and the implementation of SIDS. Further implementation of a post-accident monitoring function and a data validation function for severe accidents will be accomplished in the future

  15. Microprocessor based beam intensity and efficiency display system for the Fermilab accelerator

    International Nuclear Information System (INIS)

    Biwer, R.

    1979-01-01

    The Main Accelerator display system for the Fermilab accelerator gathers charge data and displays it, including the processed transfer efficiencies of each of the accelerators. To accomplish this, strategically located charge converters monitor the circulating internal beam of each of the Fermilab accelerators. Their outputs are processed by an asynchronously triggered, multiplexed analog-to-digital converter. The data is converted into a digital byte containing an address code and data, then stored in two 16-bit memories. One memory outputs the interleaved data as a data pulse train while the other interfaces directly to a local host computer for further analysis. The microprocessor-based display unit synchronizes displayed data during normal operation as well as in special storage modes. The display unit outputs data to the front panel in the form of a numeric value and also makes digital-to-analog conversions of displayed data for external peripheral devices. 5 refs

  16. An application of the process computer and CRT display system in BWR nuclear power station

    International Nuclear Information System (INIS)

    Goto, Seiichiro; Aoki, Retsu; Kawahara, Haruo; Sato, Takahisa

    1975-01-01

    A color CRT display system was combined with a process computer in some BWR nuclear power plants in Japan. Although the present control system uses the CRT display system only as an output device of the process computer, it has various advantages over a conventional control panel as an efficient plant-operator interface. The various graphic displays are classified into four categories. The first is operational guidance, which includes the display of the control rod worth minimizer and that of the rod block monitor. The second is the display of the results of core performance calculations, which include axial and radial distributions of power output, exit quality, channel flow rate, CHFR (critical heat flux ratio), FLPD (fraction of linear power density), etc. The third is the display of process variables and corresponding computational values. The readings of the LPRM, control rod positions and the process data concerning the turbines and feedwater system are included in this category. The fourth category includes the differential axial power distribution between the base power distribution (obtained from TIP) and the reading of each LPRM detector, and the display of various input parameters used by the process computer. Many photographs are presented to show examples of these applications. (Aoki, K.)

  17. Development and application of traffic flow information collecting and analysis system based on multi-type video

    Science.gov (United States)

    Lu, Mujie; Shang, Wenjie; Ji, Xinkai; Hua, Mingzhuang; Cheng, Kuo

    2015-12-01

    Nowadays, intelligent transportation systems (ITS) have become the new direction of transportation development. Traffic data, as a fundamental part of an intelligent transportation system, has an increasingly crucial status. In recent years, video observation technology has been widely used in the field of traffic information collection. Traffic flow information contained in video data has many advantages: it is comprehensive and can be stored for a long time. However, there are still problems, such as low precision and high cost, in the process of collecting information. Aiming at these problems, this paper proposes a traffic target detection method with broad applicability. Based on three different ways of obtaining video data (aerial photography, fixed cameras and handheld cameras), we developed intelligent analysis software that can be used to extract the macroscopic and microscopic traffic flow information in the video; this information can be used for traffic analysis and transportation planning. For road intersections, the system uses the frame difference method to extract traffic information; for freeway sections, the system uses the optical flow method to track vehicles. The system was applied in Nanjing, Jiangsu province, and the application shows that the system extracts different types of traffic flow information with high accuracy; it can meet the needs of traffic engineering observations and has good application prospects.
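
    The two extraction modes mentioned above (frame differencing for intersections, optical flow for freeway sections) can be illustrated with OpenCV; the thresholds and Farneback parameters below are illustrative defaults, not the system's calibrated values.

    ```python
    # Hedged sketch of the two extraction modes: frame differencing and dense optical flow.
    import cv2
    import numpy as np

    def moving_pixel_ratio(prev_gray, cur_gray, thresh=25):
        """Frame difference: fraction of pixels whose intensity changed notably."""
        diff = cv2.absdiff(prev_gray, cur_gray)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) / mask.size

    def mean_flow_speed(prev_gray, cur_gray):
        """Dense optical flow: average motion magnitude in pixels per frame."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        return float(mag.mean())

    # usage: gray frames come from cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ```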

  18. Image processing and computer controls for video profile diagnostic system in the ground test accelerator (GTA)

    International Nuclear Information System (INIS)

    Wright, R.; Zander, M.; Brown, S.; Sandoval, D.; Gilpatrick, D.; Gibson, H.

    1992-01-01

    This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) is discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. (Author) (3 figs., 4 refs.)

  19. Passive ultra-brief video training improves performance of compression-only cardiopulmonary resuscitation.

    Science.gov (United States)

    Benoit, Justin L; Vogele, Jennifer; Hart, Kimberly W; Lindsell, Christopher J; McMullan, Jason T

    2017-06-01

    Bystander compression-only cardiopulmonary resuscitation (CPR) improves survival after out-of-hospital cardiac arrest. To broaden CPR training, 1-2 min ultra-brief videos have been disseminated via the Internet and television. Our objective was to determine whether participants passively exposed to a televised ultra-brief video perform CPR better than unexposed controls. This before-and-after study was conducted with non-patients in an urban Emergency Department waiting room. The intervention was an ultra-brief CPR training video displayed via closed-circuit television 3-6 times/hour. Participants were unaware of the study and not told to watch the video. Pre-intervention, no video was displayed. Participants were asked to demonstrate compression-only CPR on a manikin. Performance was scored based on critical actions: check for responsiveness, call for help, begin compressions immediately, and correct hand placement, compression rate and depth. The primary outcome was the proportion of participants who performed all actions correctly. There were 50 control and 50 exposed participants. Mean age was 37, 51% were African-American, 52% were female, and 10% self-reported current CPR certification. There were no statistically significant differences in baseline characteristics between groups. The number of participants who performed all actions correctly was 0 (0%) control vs. 10 (20%) exposed (difference 20%, 95% confidence interval [CI] 8.9-31.1%). Passive exposure to ultra-brief video training is associated with improved performance of compression-only CPR.

  20. Performance and Preference with Various VDT (Video Display Terminal) Phosphors

    Science.gov (United States)

    1987-04-24

    Phosphor selections based on aesthetic judgment that ignore psychophysics and task requirements can degrade visual performance and comfort.

  1. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  2. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  3. A System to Generate SignWriting for Video Tracks Enhancing Accessibility of Deaf People

    Directory of Open Access Journals (Sweden)

    Elena Verdú

    2017-12-01

    Full Text Available Video content has increased much on the Internet during last years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to make multimedia content more accessible on the Web, by automatically translating subtitles in oral language to SignWriting, a way of writing Sign Language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWritting. Functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods including usability and accessibility tests and results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.

  4. Fast compressed domain motion detection in H.264 video streams for video surveillance applications

    DEFF Research Database (Denmark)

    Szczerba, Krzysztof; Forchhammer, Søren; Støttrup-Andersen, Jesper

    2009-01-01

    This paper presents a novel approach to fast motion detection in H.264/MPEG-4 advanced video coding (AVC) compressed video streams for IP video surveillance systems. The goal is to develop algorithms which may be useful in a real-life industrial perspective by facilitating the processing of large...... on motion vectors embedded in the video stream without requiring a full decoding and reconstruction of video frames. To improve the robustness to noise, a confidence measure based on temporal and spatial clues is introduced to increase the probability of correct detection. The algorithm was tested on indoor...
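
    A rough sketch of the compressed-domain idea follows: per-frame motion-vector energy is aggregated and a detection is raised only when it persists over several frames, a crude stand-in for the temporal/spatial confidence measure described above. The decoder interface providing per-macroblock motion vectors is assumed.

    ```python
    # Sketch: motion detection from decoder-supplied motion vectors, no full frame decoding.
    import numpy as np

    class MVMotionDetector:
        def __init__(self, energy_thresh=2.0, persistence=3):
            self.energy_thresh = energy_thresh
            self.persistence = persistence
            self.hits = 0

        def update(self, motion_vectors):
            """motion_vectors: array of shape (rows, cols, 2), one (dx, dy) per macroblock."""
            energy = np.linalg.norm(motion_vectors, axis=2).mean()
            self.hits = self.hits + 1 if energy > self.energy_thresh else 0
            return self.hits >= self.persistence     # motion confirmed over several frames
    ```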

  5. Studying the Recent Improvements in Holograms for Three-Dimensional Display

    Directory of Open Access Journals (Sweden)

    Hamed Abbasi

    2014-01-01

    Full Text Available Displays are tending toward three dimensions. The main advantage of holographic 3D displays is the possibility of observing 3D images without using glasses. The quality of images created by this method has surprised everyone. In this paper, the experimental steps of making a transmission hologram are described. Current advances of this science-art are then discussed. The aim of this paper is to study the recent improvements in creating three-dimensional images and videos by means of holographic techniques. In the last section we discuss the potential of holography for future applications.

  6. European display scene

    Science.gov (United States)

    Bartlett, Christopher T.

    2000-08-01

    The manufacture of Flat Panel Displays (FPDs) is dominated by Far Eastern sources, particularly in Active Matrix Liquid Crystal Displays (AMLCD) and Plasma. The United States has a very powerful capability in micro-displays. It is not well known that Europe has a very active research capability which has led to many innovations in display technology. In addition, there is a capability in display manufacturing of organic technologies as well as the licensed build of Japanese or Korean designs. Finally, Europe has a display systems capability in military products which is world class.

  7. Smart Streaming for Online Video Services

    OpenAIRE

    Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming

    2013-01-01

    Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...
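
    The "wasted download" effect of progressive download can be quantified with a back-of-the-envelope sketch; the viewing-time samples and rates below are hypothetical, and this is not the smart-streaming policy proposed in the paper.

    ```python
    # Back-of-the-envelope sketch: data buffered beyond the user's quitting point is wasted.
    def expected_waste(video_len_s, bitrate_mbps, download_mbps, quit_times_s):
        """Average wasted megabits over a sample of observed quit times."""
        wasted = []
        for t in quit_times_s:
            downloaded = min(download_mbps * t, bitrate_mbps * video_len_s)
            watched = bitrate_mbps * min(t, video_len_s)
            wasted.append(max(0.0, downloaded - watched))
        return sum(wasted) / len(quit_times_s)

    print(expected_waste(600, 2.0, 6.0, [30, 120, 600]))  # three hypothetical sessions
    ```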

  8. Growth, development, reproduction, physiological and behavioural studies on living organisms, human adults and children exposed to radiation from video displays

    International Nuclear Information System (INIS)

    Laverdure, A.M.; Surbeck, J.; North, M.O.; Tritto, J.

    2001-01-01

    Various living organisms, human workers and children were tested for any biological action resulting from exposure to radiation from video display terminals (VDTs). VDTs were powered by a 50-Hz alternating voltage of 220 V. Measured electric and magnetic fields were 13 V/M and 50 nT, respectively. Living organisms were maintained under their normal breeding conditions and control values were obtained before switching on the VDT. Various effects related to the irradiation time were demonstrated, i.e. growth delay in algae and Drosophila, a body weight deficiency in rats, abnormal peaks of mortality in Daphnia and Drosophila, teratological effects in chick embryos and behavioural disturbances in rats. The embryonic and neonatal periods showed a high sensitivity to the VDT radiation. In humans, after 4 h of working in front of a VDT screen, an increase in tiredness and a decrease in the resistance of the immune system were observed in workers. In prepubertal children, 20 min of exposure were sufficient to induce neuropsychological disturbances; pre-pubertal young people appear to be particularly sensitive to the effect of the radiation. In human testicular biopsies cultured in vitro for 24 h in front of a VDT screen, mitotic and meiotic disturbances, the appearance of degeneration in some aspects of the cells and significant disorganisation of the seminiferous tubules were demonstrated and related to modification of the metabolism of the sample. An experimental apparatus has been developed and tested that aims to prevent the harm from VDT radiation. Known commercially as the 'emf-Bioshield', it ensures effective protection against harmful biological effects of VDT radiation. (author)

  9. Operational experience with a high speed video data acquisition system in Fermilab experiment E-687

    International Nuclear Information System (INIS)

    Baumbaugh, A.E.; Knickerbocker, K.L.; Baumbaugh, B.; Ruchti, R.

    1987-01-01

    Operation of a high-speed, triggerable Video Data Acquisition System (VDAS), including a hardware data compactor and a 16-megabyte First-In-First-Out buffer memory (FIFO), will be discussed. Active target imaging techniques for high energy physics are described and preliminary experimental data are reported. The hardware architecture for the imaging system and experiment will be discussed, as well as other applications for the imaging system. The data rate of the compactor is over 30 megabytes/sec, and the FIFO has been run at 100 megabytes/sec. The system can be operated at standard video rates or at any rate up to 30 million pixels/second. 7 refs., 3 figs

  10. Scorebox extraction from mobile sports videos using Support Vector Machines

    Science.gov (United States)

    Kim, Wonjun; Park, Jimin; Kim, Changick

    2008-08-01

    The scorebox plays an important role in understanding the contents of sports videos. However, the tiny scorebox may make it difficult for viewers on small displays to grasp the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since various types of scoreboxes are inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the optimal information gain is computed and the top three ranked attributes in terms of information gain are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games and experimental results show its efficiency and robustness.
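
    The classification step can be sketched with scikit-learn; the three features and the training values below are hypothetical stand-ins for the information-gain-selected attributes, not the authors' data.

    ```python
    # Hedged sketch of the SVM classification of scorebox candidates.
    import numpy as np
    from sklearn.svm import SVC

    # hypothetical 3-D feature vectors per candidate region (stand-ins for the
    # information-gain-selected attributes)
    X_train = np.array([[0.92, 0.40, 3.5],    # scorebox examples
                        [0.88, 0.35, 4.0],
                        [0.30, 0.60, 1.0],    # logo / advertisement examples
                        [0.25, 0.70, 0.8]])
    y_train = np.array([1, 1, 0, 0])          # 1 = scorebox, 0 = other overlay

    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    print(clf.predict([[0.90, 0.38, 3.8]]))   # classify a new candidate region
    ```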

  11. Development of Information Display System for Operator Support in Severe Accident

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwang Il; Lee, Joon Ku [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    When a severe accident occurs, the technical support center (TSC) performs the mitigation strategy with severe accident management guidelines (SAMG) and communicates with main control room (MCR) operators to obtain information on the plant's status. In such circumstances, the importance of an information display for severe accidents increases. Therefore, an information display system dedicated to severe accident conditions is required to secure the plant information, to provide the necessary information to MCR operators and TSC operators, and to support decisions using this information. We set up the design concept of the severe accident information display system (SIDS) in the previous study and defined its functional and performance requirements. This paper describes the process and results of identifying the severe accident information for MCR operators and the implementation of SIDS. Further implementation of a post-accident monitoring function and a data validation function for severe accidents will be accomplished in the future.

  12. Virtual Display Design and Evaluation of Clothing: A Design Process Support System

    Science.gov (United States)

    Zhang, Xue-Fang; Huang, Ren-Qun

    2014-01-01

    This paper proposes a new computer-aided educational system for clothing visual merchandising and display. It aims to provide an operating environment that supports the various stages of display design in a user-friendly and intuitive manner. First, this paper provides a brief introduction to current software applications in the field of…

  13. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. Distortion often occurs during video transmission, so the received video has poor quality. A key-frame selection algorithm is flexible with respect to changes in the video, but such methods omit the temporal information of the video sequence. To minimize the distortion between the original and the received video, we added a sequential distortion minimization algorithm. Its aim was to reconstruct, sequentially frame by frame, a new video closer to the original than the received video, without significant loss of content. The reliability of the video transmission was assessed from the constellation diagram, with the best result at an IQ rate of 2 MHz and 8-QAM modulation. Video transmission was also compared with and without SEDIM (Sequential Distortion Minimization). The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparisons show that the proposed method achieves good performance. A USRP board was used as the RF front-end at 2.2 GHz.
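
    For reference, PSNR, the metric the authors report, compares the mean squared error between frames against the peak pixel value. A minimal computation might look like the sketch below; the frame data are invented and an 8-bit range is assumed.

```python
import numpy as np

def psnr(original: np.ndarray, received: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal to Noise Ratio in dB between an original and a received frame."""
    mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 640x480 grayscale frames: the received frame is the original plus channel noise.
rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
received = np.clip(original + rng.normal(0, 12, original.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(original, received):.2f} dB")
```

    The per-video figures quoted in the abstract would then be averages of this per-frame value over the whole sequence.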

  14. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    The paper presents an NPR (nonphotorealistic) video rendering system based on natural phenomena. It provides a simple nonphotorealistic video synthesis system in which the user can obtain a flow-like stylized painting and an infinite video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered as a flow-like stylized painting. Second, frame division and patch synthesis are used to synthesize an infinitely playing video. Given selected examples from different natural video textures, our system can generate flow-like stylized, infinite video scenes. Visual discontinuities between neighbouring frames are reduced, while the features and details of the frames are preserved. This rendering system is easy and simple to implement.
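
    The stylization described above builds on anisotropic Kuwahara filtering combined with line integral convolution; reproducing that is beyond a short example, but the classic isotropic Kuwahara filter it generalizes can be sketched briefly. For each pixel, the four overlapping quadrants of a window are examined and the mean of the least-varying quadrant is kept, which flattens texture while preserving edges. The window radius and the plain-loop implementation below are illustrative choices, not the paper's method.

```python
import numpy as np

def kuwahara(gray: np.ndarray, radius: int = 2) -> np.ndarray:
    """Classic (isotropic) Kuwahara filter on a 2-D grayscale image.
    For each pixel, output the mean of the (radius+1)x(radius+1) quadrant with lowest variance."""
    img = gray.astype(np.float64)
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            cy, cx = y + radius, x + radius          # pixel position in padded coordinates
            quadrants = [
                padded[cy - radius:cy + 1, cx - radius:cx + 1],  # top-left
                padded[cy - radius:cy + 1, cx:cx + radius + 1],  # top-right
                padded[cy:cy + radius + 1, cx - radius:cx + 1],  # bottom-left
                padded[cy:cy + radius + 1, cx:cx + radius + 1],  # bottom-right
            ]
            variances = [q.var() for q in quadrants]
            out[y, x] = quadrants[int(np.argmin(variances))].mean()
    return out

# Toy usage on a small noisy gradient image.
rng = np.random.default_rng(2)
image = np.tile(np.linspace(0, 255, 64), (64, 1)) + rng.normal(0, 20, (64, 64))
stylized = kuwahara(image, radius=2)
```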

  15. Improved Second-Generation 3-D Volumetric Display System. Revision 2

    Science.gov (United States)

    1998-10-01

    ...computer control, uses infrared lasers to address points within a rare-earth-infused solid glass cube. Already, simple animated computer-generated images... The Volumetric Display System permits images to be displayed in a three-dimensional format that can be observed without the use of special glasses. Its... [Figure 1-4: NEOS four-channel layout with polarizing beamsplitter, 40-MHz / 50-MHz BW TeO2 modulators and TeO2 deflectors.]

  16. Hybrid digital-analog video transmission in wireless multicast and multiple-input multiple-output system

    Science.gov (United States)

    Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin

    2016-01-01

    Wireless video multicast has become one of the key technologies in wireless applications, but the main challenge of conventional wireless video multicast, the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream together with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream ensures transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as a form of the cliff effect, ParCast, a variation of SoftCast, is also applied to video transmission to address it. The HDA scheme with corresponding power allocation algorithms is likewise applied to improve video performance. Simulations show that the proposed HDA scheme overcomes the cliff effect completely through the transmission of residuals. Moreover, it outperforms the compared WSVC scheme by more than 2 dB at the same bandwidth, and it further improves performance by nearly 8 dB in MIMO compared with the ParCast scheme.
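
    The abstract does not give its power allocation rule; the sketch below assumes the standard SoftCast-style allocation, in which an analog transform chunk of variance lambda_i gets a scaling factor proportional to lambda_i**(-1/4) so that mean squared error is minimized under a total transmit power budget. Chunk variances and the budget here are invented for illustration.

```python
import numpy as np

def softcast_power_allocation(chunk_variances: np.ndarray, total_power: float) -> np.ndarray:
    """Scaling factors g_i for analog chunks under a total power budget.
    SoftCast-style rule: g_i proportional to lambda_i**(-1/4), which minimizes MSE
    subject to sum_i g_i**2 * lambda_i = total_power."""
    lam = np.asarray(chunk_variances, dtype=np.float64)
    scale = np.sqrt(total_power / np.sum(np.sqrt(lam)))
    return scale * lam ** (-0.25)

# Toy usage: four DCT chunks with decreasing energy and a unit power budget.
lam = np.array([100.0, 25.0, 4.0, 1.0])
g = softcast_power_allocation(lam, total_power=1.0)
print(g)
print("power used:", np.sum(g**2 * lam))   # equals the budget (1.0)
```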

  17. Video motion detection for physical security applications

    International Nuclear Information System (INIS)

    Matter, J.C.

    1990-01-01

    Physical security specialists have been attracted to the concept of video motion detection for several years. Claimed potential advantages included additional benefit from existing video surveillance systems, automatic detection, improved performance compared to human observers, and cost-effectiveness. In recent years, significant advances in dedicated image-processing hardware and in image analysis algorithms and software have accelerated the successful application of video motion detection systems to a variety of physical security applications. Early video motion detectors (VMDs) were useful for interior applications of volumetric sensing. Success depended on having a relatively well-controlled environment. Attempts to use these systems outdoors frequently resulted in an unacceptable number of nuisance alarms. Currently, Sandia National Laboratories (SNL) is developing several advanced systems that employ image-processing techniques for a broader set of safeguards and security applications. The Target Cueing and Tracking System (TCATS), the Video Imaging System for Detection, Tracking, and Assessment (VISDTA), the Linear Infrared Scanning Array (LISA), the Mobile Intrusion Detection and Assessment System (MIDAS), and the Visual Artificially Intelligent Surveillance (VAIS) systems are described briefly
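
    As a minimal illustration of the underlying idea (not of any of the Sandia systems named above), a software video motion detector can threshold the pixel-wise difference between consecutive frames and alarm when enough pixels change. The thresholds and frame data below are invented for illustration.

```python
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_threshold: int = 25, area_fraction: float = 0.01) -> bool:
    """Simple frame-differencing motion detection on grayscale frames.
    Alarms when more than `area_fraction` of pixels change by more than `pixel_threshold`."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed > area_fraction * frame.size

# Toy usage: a dark scene in which a bright 'intruder' block appears in the second frame.
prev_frame = np.zeros((240, 320), dtype=np.uint8)
frame = prev_frame.copy()
frame[100:140, 150:200] = 200
print(motion_detected(prev_frame, frame))   # True
```

    Real deployments add background modelling, region filtering and nuisance-alarm suppression, which is precisely where the controlled-environment limitation noted above comes from.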

  18. Evaluation of the Educational Value of YouTube Videos About Physical Examination of the Cardiovascular and Respiratory Systems

    OpenAIRE

    Azer, Samy A; AlGrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M

    2013-01-01

    Background A number of studies have evaluated the educational contents of videos on YouTube. However, little analysis has been done on videos about physical examination. Objective This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. Methods During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three ass...

  19. Enhancing Scalability in On-Demand Video Streaming Services for P2P Systems

    Directory of Open Access Journals (Sweden)

    R. Arockia Xavier Annie

    2012-01-01

    Recently, many video applications like video telephony, video conferencing, and Video-on-Demand (VoD) have produced heterogeneous consumers on the Internet. In such a scenario, media servers play a vital role when a large number of concurrent requests are sent by heterogeneous users. Moreover, the server and the distributed client systems participating in the Internet communication have to provide suitable resources to heterogeneous users to meet their requirements satisfactorily. The challenges in providing suitable resources are to analyze the user service pattern, bandwidth and buffer availability, the nature of the applications used, and the Quality of Service (QoS) requirements of the heterogeneous users. Therefore, it is necessary to provide suitable techniques to handle these challenges. In this paper, we propose a framework for peer-to-peer (P2P) based VoD service in order to provide effective video streaming. It consists of four functional modules, namely, a Quality Preserving Multivariate Video Model (QPMVM) for efficient server management, a tracker for efficient peer management, heuristic-based content distribution, and a lightweight incentivized sharing mechanism. The first two of these modules are confined to a single entity of the framework while the other two are distributed across entities. Experimental results show that the proposed framework avoids overloading the server, increases the number of clients served, and does not compromise on QoS, even though the expected framework is slightly reduced.

  20. DASS: A decision aid integrating the safety parameter display system and emergency functional recovery procedures. Final report

    International Nuclear Information System (INIS)

    Johnson, S.E.

    1984-08-01

    Using a stand-alone developmental test-bed consisting of a minicomputer and a high-resolution color graphics computer, displays and supporting software incorporating advanced on-line decision-aid concepts were developed and evaluated. The advanced concepts embodied in displays designed for the operating crew of a PWR plant include: (1) an integrated display format which supports a top-down approach to problem detection, recovery planning, and control; (2) introduction of nonobservable plant parameters derived from first principles mass and energy balances as part of the displayed information; and (3) systematic processing and display of key success path (plant safety system) attributes. The prototype system, referred to as the PWR-DASS (Disturbance Analysis and Surveillance System), consists of 18 displays targeted for principal use by the control room systems manager. PWR-DASS was conceived to fulfill an operational void not fully supported by safety parameter display systems or reformulated emergency procedure guidelines. The results from the evaluation by licensed operators suggest that organization and display of desired critical safety function and success path information as incorporated in the PWR-DASS prototype can support the systems manager's overview. The results also point to the need for several refinements required for a field grade system, and to the need for a simulator-based evaluation of the prototype or its successor. (author)