Spector, B.; Eilbert, L.; Finando, S.; Fukuda, F.
A Video Integrated Measurement (VIM) System is described which incorporates the use of various noninvasive diagnostic procedures (moire contourography, electromyography, posturometry, infrared thermography, etc.), used individually or in combination, for the evaluation of neuromusculoskeletal and other disorders and their management with biofeedback and other therapeutic procedures. The system provides for measuring individual diagnostic and therapeutic modes, or multiple modes by split screen superimposition, of real time (actual) images of the patient and idealized (ideal-normal) models on a video monitor, along with analog and digital data, graphics, color, and other transduced symbolic information. It is concluded that this system provides an innovative and efficient method by which the therapist and patient can interact in biofeedback training/learning processes and holds considerable promise for more effective measurement and treatment of a wide variety of physical and behavioral disorders.
designed to extend the state-of-the-art in the area of thin film electroluminescent display systems. The program entails two major areas of effort...the inclusion of the residual gases in thin films is very likely and depends upon the concentrations of the gases and the reactive nature of the...reproducible films. Some exploratory work was also performed on the feasibility of applying a ZnTe/Te system black layer with the TFEL structure. Pre
Brown, Michael A.
With the advent of broadcast television as a constant source of information throughout the NASA manned space flight Mission Control Center (MCC) at the Johnson Space Center (JSC), the current Video Transport System (VTS) visually enhances real-time applications as a broadcast channel that decision-making flight controllers have come to rely on, but it is difficult to maintain and costly. The Operations Technology Facility (OTF) of the Mission Operations Facility Division (MOFD) has been tasked to provide insight into innovative technological solutions for the MCC environment, focusing on alternative architectures for a VTS. New technology will enable sharing of all imagery from one specific computer display, better known as Display Sharing (DS), to other computer displays and display systems such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and other offsite centers using IP networks. It has been stated that Internet Protocol (IP) applications can readily substitute for the current visual architecture, but quality and speed may need to be forfeited to reduce cost and improve maintainability. Although the IP infrastructure can support many technologies, the simple task of sharing one's computer display can be rather clumsy and difficult to configure and manage across the many operators and products. The DS effort will collectively automate the sharing of images while addressing characteristics such as bandwidth management, security encryption, synchronized disconnection on loss of signal / loss of acquisition, and performance latency, and will provide functions such as scalability, multi-sharing, ease of initial integration and sustained configuration, integration with video adjustment packages, collaborative tools, host / recipient controllability, and, the utmost priority, an enterprise solution that provides ownership to the whole
Xie, Ruobing; Li, Li; Jin, Weiqi; Guo, Hong
It is common for low-light night-vision helmets to pair a binocular viewer with image intensifiers. Such equipment not only provides night-vision capability but also a sense of stereo vision, enabling better perception and understanding of the visual field. However, since the image intensifier is a direct-view device, it is difficult to apply modern image processing technology to it. Developing digital video technology for night vision is therefore of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a set of two low-illumination CMOS cameras, a binocular OLED microdisplay, and an image processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration. Based on the image matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive in detail the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is ample room for function extensions in our system: the performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, etc.
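The real-time disparity computation described above rests on the standard rectified-stereo relation between disparity and depth. A minimal sketch, assuming rectified cameras with a known focal length in pixels and baseline in meters (the numbers in the example are illustrative, not from the paper):

```python
# Minimal sketch: depth from disparity for a rectified stereo pair.
# Assumes the standard pinhole model; f (pixels) and B (meters) are
# illustrative values, not the helmet's actual calibration.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Z = f * B / d for rectified cameras; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, B = 0.065 m, d = 20 px  ->  Z = 2.6 m
```

In practice the matched SURF keypoint pairs supply `d` per feature, and the calibration supplies `f` and `B`.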
In this paper we present an automatic enhanced video display and navigation capability for networked streaming video and networked video playlists. Our proposed method uses Synchronized Multimedia Integration Language (SMIL) as the presentation language and Real Time Streaming Protocol (RTSP) as the network remote-control protocol to automatically generate an "enhanced video strip" display for easy navigation. We propose and describe two approaches - a smart client approach and a smart server approach. We also describe a prototype system implementation of our proposed approach.
Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.
In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.
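Partitioning the full panorama into N equal-width views, as in the four-view and two-view interfaces above, is a simple modular mapping. A hedged sketch (the view numbering convention is an assumption for illustration):

```python
def view_for_azimuth(azimuth_deg: float, num_views: int = 4):
    """Return (view index, offset within that view, in degrees) for a
    360-degree panorama split into num_views equal-width virtual-camera views.

    View 0 is assumed to start at azimuth 0; the paper's actual camera
    orientation convention may differ.
    """
    az = azimuth_deg % 360.0
    width = 360.0 / num_views          # e.g. 90 deg for the four-view interface
    idx = int(az // width)
    return idx, az - idx * width

# Azimuth 100 deg with four 90-degree views falls in view 1, 10 deg in.
```

The same function with `num_views=2` or `num_views=1` reproduces the two-view and single-view layouts.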
Wax, David B; Hill, Bryan; Levin, Matthew A
Medical hardware and software device interoperability standards are not uniform. The result of this lack of standardization is that information available on clinical devices may not be readily or freely available for import into other systems for research, decision support, or other purposes. We developed a novel system to import discrete data from an anesthesia machine ventilator by capturing images of the graphical display screen and using image processing to extract the data with off-the-shelf hardware and open-source software. We were able to successfully capture and verify live ventilator data from anesthesia machines in multiple operating rooms and store the discrete data in a relational database at a substantially lower cost than vendor-sourced solutions.
Rehman, Abdul; Zeng, Kai; Wang, Zhou
Today's viewers consume video content on a variety of connected devices, including smartphones, tablets, notebooks, TVs, and PCs. This imposes significant challenges for managing video traffic efficiently to ensure an acceptable quality-of-experience (QoE) for end users, as the perceptual quality of video content strongly depends on the properties of the display device and the viewing conditions. State-of-the-art full-reference objective video quality assessment algorithms do not take into account the combined impact of display device properties, viewing conditions, and video resolution. We performed a subjective study in order to understand the impact of the aforementioned factors on perceptual video QoE. We also propose a full-reference video QoE measure, named SSIMplus, that provides real-time prediction of the perceptual quality of a video based on human visual system behaviors, video content characteristics (such as spatial and temporal complexity, and video resolution), display device properties (such as screen size, resolution, and brightness), and viewing conditions (such as viewing distance and angle). Experimental results show that the proposed algorithm outperforms state-of-the-art video quality measures in terms of accuracy and speed.
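One quantity that device-adaptive metrics of this kind rely on is the angular resolution of the display as seen by the viewer, which ties together screen size, resolution, and viewing distance. The sketch below computes pixels per degree at the screen center; it illustrates the idea only and is not the published SSIMplus formula:

```python
import math

def pixels_per_degree(h_resolution_px: int, screen_width_m: float,
                      viewing_distance_m: float) -> float:
    """Pixels subtended by one degree of visual angle at the screen center.

    A small pixel pitch at a long distance yields a high value (fine detail
    is invisible); a large screen viewed up close yields a low value.
    """
    pixel_pitch = screen_width_m / h_resolution_px
    deg_per_pixel = math.degrees(2.0 * math.atan(pixel_pitch / (2.0 * viewing_distance_m)))
    return 1.0 / deg_per_pixel

# A 1920-px-wide screen 1.0 m across, viewed from 2 m: about 67 px/degree.
```

Perceptual models can then discount distortions whose spatial frequency exceeds what this angular resolution lets the eye resolve.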
Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc sec. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo depth is included. If the scene is static, then stereopsis is the principal cue for revealing camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereo depth in video displays.
Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien
This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera, along with data acquired by a microcontroller, and displays them in real time on a digital panel. An 8051 microcontroller is used to measure the power dissipated by the display panel, the room temperature, and the camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test data results are presented and discussed.
Sawyer, Kevin; Jacobsen, Robert; Aiken, Edwin W. (Technical Monitor)
NASA Ames Research Center and the US Army are developing the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) using a Sikorsky UH-60 helicopter for the purpose of flight systems research. A primary use of the RASCAL is in-flight simulation, for which the visual scene will use computer-generated imagery and synthetic vision. This research is made possible in part by a full-color, wide field-of-view Helmet Mounted Display (HMD) system that provides high-performance color imagery suitable for daytime operations in a flight-rated package. This paper describes the design and performance characteristics of the HMD system. Emphasis is placed on the design specifications, testing, and integration into the aircraft of Kaiser Electronics' RASCAL HMD system, which was designed and built under contract for NASA. The optical performance and design of the helmet-mounted display unit will be discussed, as well as the unique capabilities provided by the system's Programmable Display Generator (PDG).
The quality of digital images and video signals on visual media such as TV screens and LCD displays is affected by two main factors: the display technology and the compression standards. Accurate knowledge of the characteristics of the display and the video signal can be utilized to develop advanced algorithms for signal (image or video) enhancement. One particular application of such algorithms is the case of LCDs with dynamic local backlight. The thesis addressed two main problems: first, designing algorithms that improve the visual quality of perceived image and video and reduce power consumption on local LED-LCD backlight; second, removing digital video codec artifacts such as blocking and ringing artifacts by post-processing algorithms. A novel algorithm based on image features with an optimal balance between visual quality and power consumption was developed. In addition, to remove flickering...
Kellogg, Gary V.; Wagner, Charles A.
Report describes experiment on subjective effects of rates at which display on cathode-ray tube in flight simulator updated and refreshed. Conducted to learn more about jumping, blurring, flickering, and multiple lines that observer perceives when line moves at high speed across screen of a calligraphic CRT.
design, several variations in overlay were either observed, mentioned in conversation, or came to mind, and these include: (a) pointing to a menu of...Schmandt, C. (1980), "Soft Typography", Information Processing, S.H. Lavington (ed.), North-Holland Publishing Co., pp. 1027-1031. Describes a method...first ... symbol in the menu along the bottom of the screen and has then touched the displayed map where that symbol is to appear. The lower photo
Smalley, Daniel E.; Smithwick, Quinn Y. J.; Bove, V. Michael, Jr.
We introduce a new holo-video display architecture ("Mark III") developed at the MIT Media Laboratory. The goal of the Mark III project is to reduce the cost and size of a holo-video display, making it into an inexpensive peripheral to a standard desktop PC or game machine which can be driven by standard graphics chips. Our new system is based on lithium niobate guided-wave acousto-optic devices, which give twenty or more times the bandwidth of the tellurium dioxide bulk-wave acousto-optic modulators of our previous displays. The novel display architecture is particularly designed to eliminate the high-speed horizontal scanning mechanism that has traditionally limited the scalability of Scophony-style video displays. We describe the system architecture and the guided-wave device, explain how it is driven by a graphics chip, and present some early results.
exposure to electromagnetic emission; ergonomic ... problems. This study was aimed at investigating the most prevalent visual symptoms encountered among Video Display Terminal (VDT) users in Owerri municipality prior ... problems and the predisposing factors be introduced and sustained, to forestall the outbreak of a.
Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)
This is a proposal for a general use system based, on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system, and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.
... COMMISSION Certain Video Displays and Products Using and Containing Same Institution of Investigation... importation of certain video displays and products using and containing same by reason of infringement of... after importation of certain video displays and products using and containing same that infringe one...
To measure people's reaction times to the nearest millisecond, it is necessary to know exactly when a stimulus is displayed. This article describes how to display stimuli with millisecond accuracy on a normal CRT monitor, using a PC running Linux. A simple C program is presented to illustrate how this may be done within X Windows using the OpenGL rendering system. A test of this system is reported that demonstrates that stimuli may be consistently displayed with millisecond accuracy. An algorithm is presented that allows the exact time of stimulus presentation to be deduced, even if there are relatively large errors in measuring the display time.
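The deduction step mentioned above works because a CRT can only draw a stimulus at a vertical-refresh boundary: if the measurement error is under half a refresh interval, the true onset is the nearest point on the refresh grid. A minimal sketch of that idea (the function name and the reference-vsync parameter are illustrative, not the article's actual code):

```python
def true_onset(measured_ms: float, refresh_ms: float, vsync_ref_ms: float = 0.0) -> float:
    """Snap a noisy measured display time to the nearest refresh boundary.

    measured_ms  -- timestamp of the detected stimulus onset (with error)
    refresh_ms   -- monitor refresh interval (10.0 for a 100 Hz CRT)
    vsync_ref_ms -- timestamp of any known vertical-sync pulse (grid origin)
    """
    n = round((measured_ms - vsync_ref_ms) / refresh_ms)
    return vsync_ref_ms + n * refresh_ms

# 100 Hz refresh: a reading of 43.7 ms snaps to the 40 ms frame boundary.
```

This tolerates measurement errors of up to half a refresh interval (5 ms at 100 Hz) while recovering the exact presentation time.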
Tang, Shou-Jiang; Fehring, Amanda; Mclemore, Mac; Griswold, Michael; Wang, Wanmei; Paine, Elizabeth R; Wu, Ruonan; To, Filip
Modern endoscopy requires video display. Recent miniaturized, ultraportable projectors are affordable, durable, and offer quality image display. Explore feasibility of using ultraportable projectors in endoscopy. Prospective bench-top comparison; clinical feasibility study. Masked comparison study of images displayed via 2 Samsung ultraportable light-emitting diode projectors (pocket-sized SP-HO3; pico projector SP-P410M) and 1 Microvision Showwx-II Laser pico projector. BENCH-TOP FEASIBILITY STUDY: Prerecorded endoscopic video was streamed via computer. CLINICAL COMPARISON STUDY: Live high-definition endoscopy video was simultaneously displayed through each processor onto a standard liquid crystal display monitor and projected onto a portable, pull-down projection screen. Endoscopists, endoscopy nurses, and technicians rated video images; ratings were analyzed by linear mixed-effects regression models with random intercepts. All projectors were easy to set up, adjust, focus, and operate, with no real-time lapse for any. Bench-top study outcomes: Samsung pico preferred to Laser pico, overall rating 1.5 units higher (95% confidence interval [CI] = 0.7-2.4), P < .001; Samsung pocket preferred to Laser pico, 3.3 units higher (95% CI = 2.4-4.1), P < .001; Samsung pocket preferred to Samsung pico, 1.7 units higher (95% CI = 0.9-2.5), P < .001. The clinical comparison study confirmed the Samsung pocket projector as best, with a higher overall rating of 2.3 units (95% CI = 1.6-3.0), P < .001, than Samsung pico. Low brightness currently limits pico projector use in clinical endoscopy. The pocket projector, with higher brightness levels (170 lumens), is clinically useful. Continued improvements to ultraportable projectors will supply a needed niche in endoscopy through portability, reduced cost, and equal or better image quality. © The Author(s) 2013.
Hagelin, Paul M.; Krishnamoorthy, Uma; Conant, Robert A.; Muller, Richard S.; Lau, Kam Y.; Solgaard, Olav
We describe a raster-scanning display system comprised of two tilt-up micromachined polysilicon mirrors that rotate about orthogonal axes. We have demonstrated a resolution of 102 X 119 pixels. The optical efficiency of our two- mirror micro-optical raster-scanning system is comparable to that of micromachined display systems developed by Texas Instruments and Silicon Light Machines. Ease of integration with on-chip light sources and lenses has the potential to reduce packaging size, complexity and cost of the display system and makes it well suited for head-mounted display applications.
Gustafson, Peter C.
For many years, photogrammetry has been in use at TRW. During that time, needs have arisen for highly repetitive measurements. In an effort to satisfy these needs in a timely manner, a specialized Robotic Video Photogrammetry System (RVPS) was developed by TRW in conjunction with outside vendors. The primary application for the RVPS has strict accuracy requirements that demand significantly more images than the previously used film-based system. The time involved in taking these images was prohibitive, but by automating the data acquisition process, video techniques became a practical alternative to the more traditional film-based approach. In fact, by applying video techniques, measurement productivity was enhanced significantly. Analysis involved was also brought `on-board' to the RVPS, allowing shop floor acquisition and delivery of results. The RVPS has also been applied in other tasks and was found to make a critical improvement in productivity, allowing many more tests to be run in a shorter time cycle. This paper will discuss the creation of the system and TRW's experiences with the RVPS. Highlighted will be the lessons learned during these efforts and significant attributes of the process not common to the standard application of photogrammetry for industrial measurement. As productivity and ease of use continue to drive the application of photogrammetry in today's manufacturing climate, TRW expects several systems, with technological improvements applied, to be in use in the near future.
Yakimovsky, Y.; Rayfield, M.; Eskenazi, R.
RAPID is a system capable of providing convenient digital analysis of video data in real-time. It has two modes of operation. The first allows for continuous digitization of an EIA RS-170 video signal. Each frame in the video signal is digitized and written in 1/30 of a second into RAPID's internal memory. The second mode leaves the content of the internal memory independent of the current input video. In both modes of operation the image contained in the memory is used to generate an EIA RS-170 composite video output signal representing the digitized image in the memory so that it can be displayed on a monitor.
To improve the local situational awareness (LSA) of personnel in light or heavily armored vehicles, most military organizations recognize the need to equip their fleets with high-resolution digital video systems. Several related upgrade programs are already in progress and, almost invariably, COTS IP/Ethernet is specified as the underlying transport mechanism. The high bandwidths, long reach, networking flexibility, scalability, and affordability of IP/Ethernet make it an attractive choice. There are significant technical challenges, however, in achieving high-performance, real-time video connectivity over the IP/Ethernet platform. As an early pioneer in performance-oriented video systems based on IP/Ethernet, Pleora Technologies has developed core expertise in meeting these challenges and applied a singular focus to innovating within the required framework. The company's field-proven iPORT(TM) Video Connectivity Solution is deployed successfully in thousands of real-world applications for medical, military, and manufacturing operations. Pleora's latest innovation is eDisplay(TM), a small-footprint, low-power, highly efficient IP engine that acquires video from an Ethernet connection and sends it directly to a standard HDMI/DVI monitor for real-time viewing. More costly PCs are not required. This paper describes Pleora's eDisplay IP Engine in more detail. It demonstrates how - in concert with other elements of the end-to-end iPORT Video Connectivity Solution - the engine can be used to build standards-based, in-vehicle video systems that increase the safety and effectiveness of military personnel while fully leveraging the advantages of the low-cost COTS IP/Ethernet platform.
This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV
Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects. The aims of this book are to highlight the operational attempts of video analytics, to identify possi
Diaz, Roberto; Yoon, Jang; Chen, Robert; Quinones-Hinojosa, Alfredo; Wharen, Robert; Komotar, Ricardo
Wearable technology interfaces with normal human movement and function, thereby enabling more efficient and adaptable use. We developed a wearable display system for use with intra-operative neuronavigation for brain tumor surgery. The Google Glass head-up display system was adapted to surgical loupes with a video-streaming integrated hardware and software device for display of the Stealth S7 navigation screen. Phantom trials of surface ventriculostomy were performed. The device was utilized as an alternative display screen during cranial surgery. Image-guided brain tumor resection was accomplished using Google Glass head-up display of Stealth S7 navigation images. The visual display consists of navigation video-streaming over a wireless network. The integrated video-streaming system permits display of video data to the operating surgeon without requiring movement of the head away from the operative field. Google Glass head-up display can be used for intra-operative neuronavigation in the setting of intracranial tumor resection.
... COMMISSION Certain Video Displays, Components Thereof, and Products Containing Same; Notice of Commission... importation of certain video displays, components thereof, or products containing same that infringe one or... Matter of Certain Devices for Connecting Computers via Telephone Lines, Inv. No. 337-TA-360, USITC Pub...
Lydia M. Hopper; Lambeth, Susan P; Schapiro, Steven J.
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans’, yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee...
Andryc, K.; Chamberlain, J.; Eagleson, T.; Gottschalk, G.; Kowal, B.; Kuzdeba, P.; LaValley, D.; Myers, E.; Quinn, S.; Rose, M.; Rusiecki, B.
In many situations, the difference between success and failure comes down to taking the right actions quickly. While the myriad of electronic sensors available today can provide data quickly, it may overload the operator; where only a contextualized centralized display of information and intuitive human interface can help to support the quick and effective decisions needed. If these decisions are to result in quick actions, then the operator must be able to understand all of the data of his environment. In this paper we present a novel approach in contextualizing multi-sensor data onto a full motion video real-time 360 degree imaging display. The system described could function as a primary display system for command and control in security, military and observation posts. It has the ability to process and enable interactive control of multiple other sensor systems. It enhances the value of these other sensors by overlaying their information on a panorama of the surroundings. Also, it can be used to interface to other systems including: auxiliary electro-optical systems, aerial video, contact management, Hostile Fire Indicators (HFI), and Remote Weapon Stations (RWS).
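Overlaying sensor contacts on a 360-degree panorama, as described above, comes down to mapping each contact's bearing onto a pixel column of the panoramic image. A minimal sketch under the assumption of an equirectangular (linear azimuth) panorama whose left edge corresponds to bearing 0; the actual system's conventions may differ:

```python
def azimuth_to_column(azimuth_deg: float, panorama_width_px: int) -> int:
    """Map a sensor bearing (degrees, 0 = panorama left edge, increasing
    clockwise) to the pixel column where its overlay symbol is drawn.

    Assumes a linear azimuth-to-x mapping, i.e. an equirectangular panorama.
    """
    az = azimuth_deg % 360.0                       # normalize, handles negatives
    return int(az / 360.0 * panorama_width_px) % panorama_width_px

# A contact reported at bearing 90 deg on a 3600-px panorama lands at column 900.
```

HFI or RWS cue symbols can then be composited at that column (with elevation mapped to the row analogously) so the operator sees every sensor's report in the context of the surrounding scene.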
Roebuck, Andrew J; Dubnyk, Aurora J B; Cochran, David; Mandryk, Regan L; Howland, John G; Harms, Victoria
Research in asymmetrical visuospatial attention has identified a leftward bias in the general population across a variety of measures, including visual attention and line-bisection tasks. In addition, increases in rightward collisions, or bumping, during visuospatial navigation tasks have been demonstrated in real-world and virtual environments. However, little research has investigated these biases beyond the laboratory. The present study uses a semi-naturalistic approach and the online video game streaming service Twitch to examine navigational errors and assaults as skilled action video game players (n = 60) compete in Counter-Strike: Global Offensive. This study showed a significant rightward bias in both fatal assaults and navigational errors. Analysis using the in-game ranking system as a measure of skill failed to show a relationship between bias and skill. These results suggest that a leftward visuospatial bias may exist in skilled players during online video game play. However, the present study was unable to account for some factors such as environmental symmetry and player handedness. In conclusion, video game streaming is a promising method for behavioural research; however, further study is required before one can determine whether these results are an artefact of the method applied or representative of a genuine rightward bias.
Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))
The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.
Gu, Guohua; Chen, Qian; Bai, Lianfa; Zhang, Baomin
A new technique combining scanning and quasi-static methods is presented. When applied to true-color panel video display, the technique improves the matching of the three primary colors and increases the brightness of the whole screen. At the same time, because a scanning method is used, the cost of the system is not as high as that of ordinary systems. Sampling of high-resolution data, shaping and enhancement of image outlines, and nonlinear correction of the primary colors are also discussed.
Golightly, M.; Raben, V.; Weyland, M.
The Solar Active Region Display System (SARDS) is a client-server application that automatically collects a wide range of solar data and displays it in a format easy for users to assimilate and interpret. Users can rapidly identify active regions of interest or concern from color-coded indicators that visually summarize each region's size, magnetic configuration, recent growth history, and recent flare and CME production. The active region information can be overlaid onto solar maps, multiple solar images, and solar difference images in orthographic, Mercator or cylindrical equidistant projections. Near real-time graphs display the GOES soft and hard x-ray flux, flare events, and daily F10.7 value as a function of time; color-coded indicators show current trends in soft x-ray flux, flare temperature, daily F10.7 flux, and x-ray flare occurrence. Through a separate window up to 4 real-time or static graphs can simultaneously display values of KP, AP, daily F10.7 flux, GOES soft and hard x-ray flux, GOES >10 and >100 MeV proton flux, and Thule neutron monitor count rate. Climatologic displays use color-valued cells to show F10.7 and AP values as a function of Carrington/Bartel's rotation sequences - this format allows users to detect recurrent patterns in solar and geomagnetic activity as well as variations in activity levels over multiple solar cycles. Users can customize many of the display and graph features; all displays can be printed or copied to the system's clipboard for "pasting" into other applications. The system obtains and stores space weather data and images from sources such as the NOAA Space Environment Center, NOAA National Geophysical Data Center, the joint ESA/NASA SOHO spacecraft, and the Kitt Peak National Solar Observatory, and can be extended to include other data series and image sources. Data and images retrieved from the system's database are converted to XML and transported from a central server using HTTP and SOAP protocols, allowing
a member of the American Society of Photogrammetry. ABSTRACT: A cooperative effort between four government ... recently resulted in ... video tapes, movie film, transparencies, paper photographic prints, paper maps, charts, and documents. Each of these media has its own space ... perspective terrain views, engineering drawings, harbor charts, ground photographs, slides, movies, video tapes, documents, and organizational logos
Korhonen, Jari; Mantel, Claire; Burini, Nino
Objective image and video quality metrics focus mostly on the digital representation of the signal. However, the display characteristics are also essential for the overall Quality of Experience (QoE). In this paper, we use a model of a backlight dimming system for Liquid Crystal Displays (LCD) and show how the modeled image can be used as an input to quality assessment algorithms. For quality assessment, we propose an image quality metric based on Peak Signal-to-Noise Ratio (PSNR) computation in the CIE L*a*b* color space. The metric takes luminance reduction, color distortion and loss of uniformity in the resulting image into consideration. Subjective evaluations of images generated using different backlight dimming algorithms and clipping strategies show that the proposed metric estimates the perceived image quality more accurately than conventional PSNR.
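The core of such a metric can be sketched in a few lines of numpy. The sRGB-to-L*a*b* constants below are the standard sRGB/D65 values; the peak of 100 and the plain Euclidean error are illustrative assumptions, not the authors' exact formulation, which additionally weights luminance reduction and uniformity loss:

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1]) to CIE L*a*b* (D65 white)."""
    # Inverse sRGB gamma (IEC 61966-2-1)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB primaries
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def psnr_lab(reference, distorted, peak=100.0):
    """PSNR computed on the Euclidean color error in L*a*b* space."""
    err = srgb_to_lab(reference) - srgb_to_lab(distorted)
    mse = np.mean(np.sum(err ** 2, axis=-1))
    return 10 * np.log10(peak ** 2 / mse)
```

Because the error is measured in a perceptually more uniform space, a strong global dimming scores worse than a mild one even when their RGB-domain PSNR difference is small.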
Castellano, Timothy P.
A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and the system weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.
Yang, Fan; Ma, Chunting; Li, Haoyi
This paper presents the design of a wireless video transmission system based on the STM32. The system uses the STM32F103VET6 microprocessor as its core: video data collected by the video acquisition module is sent to the receiver through the wireless transmitting module, and the received data is displayed on an LCD screen. The software design process of the receiver and transmitter is introduced. Experiments prove that the system realizes the wireless video transmission function.
Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J
Video displays for behavioral research lend themselves particularly well to studies with chimpanzees (Pan troglodytes), as their vision is comparable to humans', yet there has been no formal test of the efficacy of video displays as a form of social information for chimpanzees. To address this, we compared the learning success of chimpanzees shown video footage of a conspecific compared to chimpanzees shown a live conspecific performing the same novel task. Footage of an unfamiliar chimpanzee operating a bidirectional apparatus was presented to 24 chimpanzees (12 males, 12 females), and their responses were compared to those of a further 12 chimpanzees given the same task but with no form of information. Secondly, we also compared the responses of the chimpanzees in the video display condition to responses of eight chimpanzees from a previously published study of ours, in which chimpanzees observed live models. Chimpanzees shown a video display were more successful than those in the control condition and showed comparable success to those that saw a live model. Regarding fine-grained copying (i.e. the direction that the door was pushed), only chimpanzees that observed a live model showed significant matching to the model's methods with their first response. Yet, when all the responses made by the chimpanzees were considered, comparable levels of matching were shown by chimpanzees in both the live and video conditions. © 2012 Wiley Periodicals, Inc.
Meyer, R.H.; Bauhs, K.C.
This report describes the HP370 component of the Enhanced Graphics System (EGS) used at Tonopah Test Range (TTR). Selected radar data are fed into the computer systems, and the resulting tracking symbols are displayed on high-resolution video monitors in real time. These tracking symbols overlay background maps and are used for monitoring/controlling various flight vehicles. This report discusses both the operational aspects and the internal configuration of the HP370 Workstation portion of the EGS system.
Even barely acceptable quality holographic 3D video displays require hundreds of megapixels with a pixel size on the order of a fraction of a micrometer when a conventional flat panel SLM arrangement is used. Smaller pixel sizes are essential to get larger diffraction angles. Common flat display panels, however, have pixel sizes on the order of tens of micrometers, and this results in diffraction angles on the order of one degree. In this design, an array of commonly available flat display panels (similar to high-end mobile phone display panels) is used. Each flat panel, as an element of the array, directs its outgoing low-diffraction-angle light beam to a corresponding small portion of a large paraboloid mirror; the mirror then reflects the slowly-expanding, information-carrying beam to direct it at a certain exit angle; this beam constitutes a portion of the final real ghost-like 3D holographic image. The collection of those contributions from all such flat display panels covers the entire 360 degrees and thus constitutes the final real 3D table-top holographic display with a 360-degree viewing angle. The size of the resultant display is smaller than the physical size of the paraboloid mirror, or the overall size of the display panel array; however, an acceptably sized table-top display can easily be constructed for living-room viewing. A matching camera can also be designed by reversing the optical paths and replacing the flat display panels with flat wavefront capture devices.
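The pixel-pitch figures quoted above follow directly from the grating equation, a standard optics result rather than anything specific to this paper. A quick numerical check of the claim that tens-of-micrometer pixels yield roughly one-degree diffraction angles:

```python
import math

def max_diffraction_half_angle_deg(wavelength_um, pixel_pitch_um):
    """Maximum diffraction half-angle for an SLM of given pixel pitch,
    from the Nyquist limit of the grating equation: sin(theta) = lambda / (2p)."""
    s = wavelength_um / (2 * pixel_pitch_um)
    return math.degrees(math.asin(min(s, 1.0)))

# Green light (0.53 um): a 20 um flat-panel pixel steers light by well under
# a degree, while a pitch of a fraction of a micrometer approaches the full
# +/-90 degree cone needed for wide-angle holographic display.
```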
Cao, Xuan; Geng, Zheng; Li, Tuotuo; Zhang, Mei; Zhang, Zhaoxing
Compressive light field display based on multi-layer LCDs is becoming a popular solution for 3D display. Decomposing the light field into layer images is the most challenging task. Iterative algorithms are effective solvers for this high-dimensional decomposition problem. Existing algorithms, however, iterate from random initial values; significant computation time is therefore required because of the deviation between the random initial estimate and the target values, and real-time 3D display at video rate is difficult with such algorithms. In this paper, we present a new algorithm that provides better initial values and accelerates the decomposition of light field video. We utilize the internal coherence of a single light field frame to transfer the deviation between initial estimate and target to a much lower resolution level. In addition, we exploit the external (inter-frame) coherence of light field video for further acceleration, achieving a 5.91-times speed improvement. We built a prototype and developed a parallel algorithm based on CUDA.
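The role of initialization in such iterative decompositions can be illustrated with a deliberately tiny model: a two-layer display whose ray intensities factor as front[i] * back[j], solved by alternating least squares. This is a toy stand-in, not the paper's multi-layer CUDA algorithm; the point is that a good starting value (e.g. one upsampled from a low-resolution solve, as the paper proposes) starts the iteration near the target:

```python
import numpy as np

def decompose_two_layer(target, init_front=None, iters=20):
    """Alternating least-squares factorization of a ray-intensity matrix
    target[i, j] ~ front[i] * back[j], a toy model of two-layer light field
    decomposition. init_front lets the caller seed the iteration instead of
    starting from random values. Real hardware would additionally clip the
    layer transmittances to [0, 1]."""
    rng = np.random.default_rng(0)
    front = rng.random(target.shape[0]) if init_front is None else init_front.copy()
    for _ in range(iters):
        back = target.T @ front / (front @ front)   # optimal back layer given front
        front = target @ back / (back @ back)       # optimal front layer given back
    return front, back
```

On an exactly factorable target this converges almost immediately; the full multi-layer, nonnegative problem needs many more iterations, which is why better initial values translate directly into speedup.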
Qian, Jia; Sui, Xiubao
This paper presents a high-definition video display solution based on an FPGA and the THS8200. The THS8200 is a video encoder chip from TI with three 10-bit DAC channels; it accepts video data in both 4:2:2 and 4:4:4 formats, and its data synchronization can come either from the dedicated synchronization signals HSYNC and VSYNC or from the SAV/EAV codes embedded in the video stream. In this paper, we utilize address and control signals generated by the FPGA to access the data-storage array; the FPGA then generates the corresponding digital YCbCr video signals. These signals, combined with the HSYNC and VSYNC synchronization signals also generated by the FPGA, act as the input to the THS8200. To meet the bandwidth requirements of high-definition TV, we adopt video input in the 4:2:2 format over a 2x10-bit interface. The THS8200 is controlled by the FPGA over the I2C bus to set its internal registers; as a result, it generates synchronization signals compliant with the SMPTE standards and converts the digital YCbCr video signals into analog YPbPr video signals. The composite analog output signals YPbPr thus consist of the image data signal and the synchronization signal, superimposed inside the THS8200. The experimental results indicate that the method presented in this paper is a viable solution for high-definition video display, conforming to the input requirements of new high-definition display devices.
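The RGB-to-YCbCr step that precedes the encoder reduces to a fixed weighted sum per pixel. A minimal 8-bit, full-range sketch using the ITU-R BT.601 weights (an HD pipeline such as the one described would normally use the BT.709 coefficients instead; only the constants differ, and the chip itself works with 10-bit samples):

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """Full-range 8-bit RGB -> YCbCr using the ITU-R BT.601 luma weights.
    Y carries brightness; Cb and Cr are offset-binary color differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 + 0.564 * (b - y)
    cr = 128 + 0.713 * (r - y)
    return round(y), round(cb), round(cr)
```

In hardware the same arithmetic is done with fixed-point multiplies; 4:2:2 transmission then sends Cb and Cr at half the horizontal rate of Y.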
Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F
This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Offering ready access to the security industry's cutting-edge digital future, Intelligent Network Video provides the first complete reference for all those involved with developing, implementing, and maintaining the latest surveillance systems. Pioneering expert Fredrik Nilsson explains how IP-based video surveillance systems provide better image quality and a more scalable and flexible system at lower cost. A complete and practical reference for all those in the field, this volume: describes all components relevant to modern IP video surveillance systems; provides in-depth information about ima
Hsu, Charles; Szu, Harold
An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system, and making it suitable for remote battlefield, tactical, and civilian applications including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Mantel, Claire; Bech, Søren; Korhonen, Jari; Forchhammer, Søren; Pedersen, Jesper Melgaard
Local backlight dimming is a technology aiming at both saving energy and improving visual quality on television sets. As the rendition of the image is specified locally, the numerical signal corresponding to the displayed image needs to be computed through a model of the display. This simulated signal can then be used as input to objective quality metrics. The focus of this paper is on determining which characteristics of locally backlit displays influence quality assessment. A subjective experiment assessing the quality of highly contrasted videos displayed with various local backlight-dimming algorithms is set up. Subjective results are then compared with both objective measures and objective quality metrics using different display models. The first analysis indicates that the most significant objective features are temporal variations, power consumption (probably representing leakage), and a contrast measure. The second analysis shows that modeling of leakage is necessary for objective quality assessment of sequences displayed with local backlight dimming.
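The display model such an analysis requires can be as simple as a point-wise simulation of the panel: an LCD cannot block its backlight perfectly, so every pixel leaks some fraction of the local backlight level. The sketch below is a toy illustration; the leakage factor and the clipping rule are assumptions, not the calibrated model used in the paper:

```python
import numpy as np

def simulate_displayed_luminance(target, backlight, leakage=0.005):
    """Toy point-wise model of an LCD with local backlight dimming.
    target and backlight are arrays in [0, 1]; each pixel transmits at
    least `leakage` of the local backlight, so dimming the backlight is
    the only way to deepen blacks. Returns the luminance actually shown."""
    transmittance = np.clip(target / np.maximum(backlight, 1e-6), 0.0, 1.0)
    return backlight * np.maximum(transmittance, leakage)
```

Feeding this simulated signal (rather than the nominal target image) into a quality metric is what lets the metric account for leakage, which the subjective results above indicate is necessary.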
Mantel, C.; Bech, Søren; Forchhammer, S.
This paper investigates what composes the quality of videos displayed on LCD with local backlight dimming. In a subjective experiment, participants assessed the level of nine attributes defined using the Qualitative Descriptive Analysis method. Results show that three attributes (Contrast, Change...
Alsmirat, Mohammad Abdullah
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a stereo video see-through head-mounted display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. Firstly, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to be better than traditional marker-based and sensor-based AR environments. The demonstration system was evaluated with a plastic dummy head, and the display result is satisfactory for multiple-view observation.
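The RANSAC correction step can be illustrated with the simplest possible motion model: estimating a 2-D translation from KLT-style point matches contaminated by outliers. The minimal-sample size, iteration count, and threshold below are illustrative assumptions, and a real camera localization would fit a full camera pose rather than a translation:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=1):
    """RANSAC estimate of a 2-D translation between matched points.
    Each iteration hypothesizes a translation from one random match,
    counts how many matches agree within `tol` pixels, and keeps the
    largest consensus set, which is then refined by averaging."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))                 # minimal sample: one match
        t = dst[i] - src[i]
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)  # refine
    return best_t, best_inliers
```

The same hypothesize-and-verify loop, with a pose solver in place of the one-point translation, is what rejects mistracked KLT features in the localization pipeline.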
A video surveillance system senses and tracks threatening events in the real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist in IP-based applications may also threaten the video surveillance application; as a result, cybercrime, illegal video access, and mishandling of video may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.
Zhao, Heng; Wang, Xiang-jun
This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller is used as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion, and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from the CCD camera into Low Voltage Differential Signaling (LVDS), which is collected by the video processing unit through the Camera Link interface. The processed video signals are then input to the system output board and displayed on the monitor. The current experiment shows that the system achieves high-quality video conversion with minimal board size.
Lee, Jin-Ho; Ko, Young-Chul; Mun, Yong-Kweun; Choi, Byoung-So; Kim, Jong-Min; Jeon, Duk Young
We acquired a two-dimensional (2D) laser vector graphic video image using 1500 μm × 1200 μm silicon scanning mirrors with vertical comb fingers. Vector image signals from the graphic board were applied to two scanning mirrors, and a SHG green laser was directly modulated to shape independent graphic images. These scanning mirrors were originally designed for laser raster video display as a galvanometric vertical scanner, and are controlled perfectly by a 60 Hz ramp waveform with a duty cycle of 90%.
Su, Ang; Zhang, Yueqiang; Dong, Jing; Xu, Yuhua; Zhu, Xianwei; Zhang, Xiaohu
The high portability of small Unmanned Aerial Vehicles (UAVs) makes them play an important role in surveillance and reconnaissance tasks, so military and civilian demand for UAVs is constantly growing. Recently, we have developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that our system performs well.
Ge, Jing; Zhang, Guoping; Yang, Zongkai
Multimedia technology and network protocols are the basic technologies of video surveillance systems. A networked remote video surveillance system based on the MPEG-4 video coding standard is designed and implemented in this paper. The advantages of MPEG-4 in the surveillance field are analyzed in detail, and the Real-time Transport Protocol and Real-time Transport Control Protocol (RTP/RTCP) are chosen as the network transmission protocols. The whole system includes a video coding control module, a playback module, a network transmission module, and a network receiver module. Schemes for the management, control, and storage of video data are discussed. DirectShow technology is used to play back the video data. The transmission scheme for digital video over networks, i.e., RTP packaging of the MPEG-4 video stream, is discussed, as are the receiving scheme for video data and the buffering mechanism. Most of the functions are achieved in software, except that the video coding control module is implemented in hardware. The experimental results show that the system provides good video quality and real-time performance, and it can be applied in a wide range of fields.
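RTP packaging of the coded video amounts to prepending a 12-byte fixed header (RFC 3550) to each fragment of the MPEG-4 stream. A minimal sketch; payload type 96 is a typical dynamic assignment negotiated out of band, an assumption rather than a fixed value:

```python
import struct

def rtp_packet(payload, seq, timestamp, ssrc, payload_type=96, marker=0):
    """Build a minimal RTP packet: the RFC 3550 fixed header (version 2,
    no padding, no extension, no CSRC list) followed by the payload.
    seq increments per packet; timestamp uses the 90 kHz video clock."""
    byte0 = 2 << 6                          # V=2, P=0, X=0, CC=0
    byte1 = (marker << 7) | payload_type    # M bit + 7-bit payload type
    header = struct.pack('!BBHII', byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload
```

The receiver-side buffering the abstract mentions exists largely to reorder packets by this sequence number and to schedule playout by the timestamp.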
Hua, Hong; Krishnaswamy, Prasanna; Rolland, Jannick P.
Head pose is utilized to approximate a user's line of sight for real-time image rendering and interaction in most 3D visualization applications using head-mounted displays (HMDs). The eye often reaches an object of interest before the completion of most head movements, so it is highly desirable to integrate eye-tracking capability into HMDs in various applications. While the added complexity of an eyetracked HMD (ET-HMD) imposes challenges on designing a compact, portable, and robust system, the integration offers opportunities to improve eye-tracking accuracy and robustness. In this paper, based on the modeling of an eye imaging and tracking system, we examine the challenges and identify parametric requirements for video-based pupil-glint tracking methods in an ET-HMD design, and predict how these parameters may affect the tracking accuracy, resolution, and robustness. We further present novel methods and associated algorithms that effectively improve eye-tracking accuracy and extend the tracking range.
Collomosse, J.; Kindberg, T.
We present ‘Screen codes’ - a space- and time-efficient, aesthetically compelling method for transferring data from a display to a camera-equipped mobile device. Screen codes encode data as a grid of luminosity fluctuations within an arbitrary image, displayed on the video screen and decoded on a mobile device. These ‘twinkling’ images are a form of ‘visual hyperlink’, by which users can move dynamically generated content to and from their mobile devices. They help bridge the ‘content divide’...
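The encoding idea can be sketched in numpy. Everything concrete below (grayscale frames, one block per bit, a fixed ±delta over a two-frame cycle) is an illustrative assumption rather than the authors' actual Screen codes scheme:

```python
import numpy as np

def twinkle_frames(image, bits, delta=8):
    """Encode bits as per-block luminosity fluctuations over two frames:
    a 1-bit brightens its block in frame 0 and darkens it in frame 1, a
    0-bit does the reverse. Assumes a grayscale uint8 image and a square
    grid of len(bits) blocks."""
    n = int(len(bits) ** 0.5)
    h, w = image.shape[0] // n, image.shape[1] // n
    f0 = image.astype(np.int16)
    f1 = f0.copy()
    for k, bit in enumerate(bits):
        r, c = divmod(k, n)
        sign = 1 if bit else -1
        f0[r*h:(r+1)*h, c*w:(c+1)*w] += sign * delta
        f1[r*h:(r+1)*h, c*w:(c+1)*w] -= sign * delta
    return (np.clip(f0, 0, 255).astype(np.uint8),
            np.clip(f1, 0, 255).astype(np.uint8))
```

A camera-side decoder recovers each bit by differencing the two frames block-wise and taking the sign of the mean, which is what makes the carrier image itself largely irrelevant to the data channel.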
Mocci, F; Serra, A.; Corrias, G
OBJECTIVES—To examine the part played by psychological factors in complaints about visual health reported by banking officers who work at video display terminals (VDTs). METHODS—Out of a population of 385 bank workers, a group of 212 subjects without organic visual disturbances (as determined by ophthalmological examination) who share a work environment and job duties was selected. Three questionnaires were administered to these subjects: (a) the NIOSH job stress questionnaire; (b) a questionn...
Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin
Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations, becomes difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 fourth-year medical students who participated in the general ENT course at Aachen University Hospital were asked to assess depth cues within the six video clips plus the cochlear implantation clips. Another 25 fourth-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic
Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)
The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.
In this review, an odor sensing system and an olfactory display for people in pharmacy are introduced. An odor sensing system consists of an array of sensors with partially overlapping specificities and a pattern recognition technique. One example of an odor sensing system is a halitosis sensor, which quantifies the mixture composition of three volatile sulfide compounds. The halitosis sensor was realized using a preconcentrator to raise sensitivity and an electrochemical sensor array to suppress the influence of humidity. The partial least squares (PLS) method was used to quantify the mixture composition, and the experiments reveal that sufficient accuracy was obtained. Moreover, the olfactory display, which presents scents to human noses, is explained. A multi-component olfactory display enables the presentation of a variety of smells. Two types of multi-component olfactory display are described. The first uses many solenoid valves with high-speed switching; the valve ON frequency determines the concentration of the corresponding odor component. The second consists of miniaturized liquid pumps and a surface acoustic wave (SAW) atomizer, which enables a wearable olfactory display without smell persistence. Finally, an application of the olfactory display is demonstrated: a virtual ice cream shop with scents was created as a piece of interactive art in which people can enjoy harmony among vision, audition, and olfaction. In conclusion, both odor sensing systems and olfactory displays can contribute to the field of human health care.
From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated
Flynn, Emma; Whiten, Andrew
Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.
Terakawa, Yuzo; Ishibashi, Kenichi; Goto, Takeo; Ohata, Kenji
Three-dimensional (3-D) video recording of microsurgery is a more promising tool for presentation and education of microsurgery than conventional two-dimensional video systems, but has not been widely adopted partly because 3-D image processing of previous 3-D video systems is complicated and observers without optical devices cannot visualize the 3-D image. A new technical development for 3-D video presentation of microsurgery is described. Microsurgery is recorded with a microscope equipped with a single high-definition (HD) video camera. This 3-D video system records the right- and left-eye views of the microscope simultaneously as single HD data with the use of a 3-D camera adapter: the right- and left-eye views of the microscope are displayed separately on the right and left sides, respectively. The operation video is then edited with video editing software so that the right-eye view is displayed on the left side and left-eye view is displayed on the right side. Consequently, a 3-D video of microsurgery can be created by viewing the edited video by the cross-eyed stereogram viewing method without optical devices. The 3-D microsurgical video provides a more accurate view, especially with regard to depth, and a better understanding of microsurgical anatomy. Although several issues are yet to be addressed, this 3-D video system is a useful method of recording and presenting microsurgery for 3-D viewing with currently available equipment, without optical devices.
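The editing step described, swapping which eye's view appears on which side, is a one-line transform per frame. A numpy sketch, assuming a side-by-side recording with the left-eye view on the left (as the 3-D camera adapter produces):

```python
import numpy as np

def to_cross_eyed(frame):
    """Swap the side-by-side halves of a stereo frame so the right-eye view
    sits on the left and vice versa: exactly the edit needed for free
    (cross-eyed) stereo viewing without optical devices."""
    half = frame.shape[1] // 2
    return np.concatenate([frame[:, half:], frame[:, :half]], axis=1)
```

Applying the same swap twice restores the original parallel-view layout, so the edit is trivially reversible.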
O'Hara, J. M.; Pirus, D.; Beltracchi, L.
This paper discusses the presentation of information in computer-based control rooms. Issues associated with the typical displays currently in use are reviewed. It is concluded that these displays should be augmented with new displays designed to better meet the information needs of plant personnel and to minimize the need for interface management tasks (the activities personnel have to perform to access and organize the information they need). Several approaches to information design are discussed, specifically addressing: (1) monitoring, detection, and situation assessment; (2) routine task performance; and (3) teamwork, crew coordination, and collaborative work.
Woods, Russell L.
Gaze-contingent displays combine a display device with an eyetracking system to rapidly update an image on the basis of the measured eye position. All such systems have a delay, the system latency, between a change in gaze location and the related change in the display. The system latency is the result of the delays contributed by the eyetracker, the display computer, and the display, and it is affected by the properties of each component, which may include variability. We present a direct, simple, and low-cost method to measure the system latency. The technique uses a device to briefly blind the eyetracker system (e.g., for video-based eyetrackers, a device with infrared light-emitting diodes (LEDs)), creating an eyetracker event that triggers a change to the display monitor. The time between these two events, as captured by a relatively low-cost consumer camera with high-speed video capability (1,000 Hz), is an accurate measurement of the system latency. With multiple measurements, the distribution of system latencies can be characterized. The same approach can be used to synchronize the eye position time series and a video recording of the visual stimuli that would be displayed in a particular gaze-contingent experiment. We present system latency assessments for several popular types of displays and discuss what values are acceptable for different applications, as well as how system latencies might be improved. PMID:23949955
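Once the blinding event and the display change have been localized to frame indices in the high-speed recording, the latency itself is simple arithmetic, and repeated measurements characterize the distribution. A sketch under that assumption (the function names and the 1,000 Hz default are illustrative):

```python
from statistics import mean, stdev

def system_latency_ms(blind_frame: int, display_frame: int,
                      fps: float = 1000.0) -> float:
    """Latency between the eyetracker-blinding event and the display change.

    blind_frame / display_frame are the indices of the frames in the
    high-speed recording where each event first becomes visible.
    """
    if display_frame < blind_frame:
        raise ValueError("display change cannot precede the blinding event")
    return (display_frame - blind_frame) / fps * 1000.0

def latency_stats(latencies_ms):
    """Summarize the latency distribution over repeated measurements."""
    return {"mean": mean(latencies_ms), "sd": stdev(latencies_ms),
            "min": min(latencies_ms), "max": max(latencies_ms)}
```

At 1,000 frames per second each frame contributes 1 ms of quantization, which bounds the resolution of a single measurement.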
... From the Federal Register Online via the Government Publishing Office FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 76 Open Video Systems AGENCY: Federal Communications Commission. ACTION: Final rule... Open Video Systems. DATES: The amendments to 47 CFR 76.1505(d) and 76.1506(d), (l)(3), and (m)(2...
Fonseca, Diana; Kraus, Martin
The present study is designed to test how immersion, presence, and narrative content (with a focus on emotional immersion) can affect one's pro-environmental attitude and behavior, with specific interest in 360° videos and meat consumption as a non-pro-environmental behavior. This research describes a between-group design experiment that compares two systems with different levels of immersion and two types of narratives, one with and one without emotional content. In the immersive video (IV) condition (high immersion), 21 participants used a Head-Mounted Display (HMD) to watch an emotional 360° video about meat consumption and its effects on the environment; another 21 participants experienced the tablet condition (low immersion), where they viewed the same video but on a 10.1-inch tablet; 22 participants in the control condition viewed a non-emotional video about submarines with an HMD...
Basu, Rivu; Dasgupta, Aparajita; Ghosal, Gautam
IT has revolutionized economies throughout the world, more so in India, and West Bengal has also had its share of the IT boom. With it, however, has come a class of workers, Video Display Terminal (VDT) operators, in whom it can cause a host of occupational problems affecting the musculoskeletal, ocular and psychological systems. The current study assessed some of the musculoskeletal disorders occurring due to VDT use. An analytical cross-sectional study was done in a software company of Sector V, Kolkata, the IT hub of West Bengal. Of all the employees, the required sample size of 206 was selected by simple random sampling. After proper permissions and consent, socio-demographic variables were collected by standardized instruments, musculoskeletal morbidity was assessed with the Nordic questionnaire, and ergonomic practices were obtained by checklists. 90.78% of the population showed some form of musculoskeletal symptoms. Symptoms were most frequent in the fingers, elbows, wrists, shoulders and upper back, while the legs and lower back showed low morbidity. Increasing age, female sex, increasing years of work, repetition of work and poorer ergonomic scores were all shown to increase the symptoms. The region-wise ergonomic scores revealed how poorer scores adversely affected the musculoskeletal system. Several individual adverse ergonomic practices were also elicited. The study is in line with many other studies throughout the world and in India; however, a much higher morbidity was found here, probably due to the symptom-based questionnaire. The adverse practices observed accord well with other relevant studies. This study highlights the occupational health problems of VDT users and upholds the need for future multicentric cohort studies, along with implementation of proper measures to ameliorate the effects of this occupational hazard.
Dasgupta, Aparajita; Ghosal, Gautam
Introduction: IT has revolutionized economies throughout the world, more so in India, and West Bengal has also had its share of the IT boom. With it, however, has come a class of workers, Video Display Terminal (VDT) operators, in whom it can cause a host of occupational problems affecting the musculoskeletal, ocular and psychological systems. The current study assessed some of the musculoskeletal disorders occurring due to VDT use. Materials and Methods: An analytical cross-sectional study was done in a software company of Sector V, Kolkata, the IT hub of West Bengal. Of all the employees, the required sample size of 206 was selected by simple random sampling. After proper permissions and consent, socio-demographic variables were collected by standardized instruments, musculoskeletal morbidity was assessed with the Nordic questionnaire, and ergonomic practices were obtained by checklists. Results: 90.78% of the population showed some form of musculoskeletal symptoms. Symptoms were most frequent in the fingers, elbows, wrists, shoulders and upper back, while the legs and lower back showed low morbidity. Increasing age, female sex, increasing years of work, repetition of work and poorer ergonomic scores were all shown to increase the symptoms. The region-wise ergonomic scores revealed how poorer scores adversely affected the musculoskeletal system. Several individual adverse ergonomic practices were also elicited. Discussion: The study is in line with many other studies throughout the world and in India; however, a much higher morbidity was found here, probably due to the symptom-based questionnaire. The adverse practices observed accord well with other relevant studies. Conclusion: This study highlights the occupational health problems of VDT users and upholds the need for future multicentric cohort studies, along with implementation of proper measures to ameliorate the effects of this occupational hazard. PMID
Marchessoux, Cédric; Rombaut, Alexis; Kimpe, Tom; Vermeulen, Brecht; Demeester, Piet
In the context of medical display validation, a simulation chain has been developed to facilitate display design and image quality validation. One important part is the human visual observer model used to quantify the quality perception of the simulated images. For several years, multiple research groups have been modeling the various aspects of human perception to integrate them into a complete Human Visual System (HVS) model and to develop visible image difference metrics. In our framework, the JNDmetrix is used. It reflects human subjective assessment of image or video fidelity. Nevertheless, the system is limited and not suitable for our accurate simulations: it is restricted to RGB 8-bit integer images, and the model must take into account display parameters like gamma, black offset, ambient light... It needs to be extended. The solutions proposed to extend the HVS model are: precision enhancement to overcome the 8-bit limit, color space conversion between XYZ and RGB, and adaptation to the display parameters. The preprocessing does not introduce any perceived distortion caused, for example, by precision enhancement. With this extension the model is used on a daily basis in the display simulation chain.
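One of the proposed extensions is the color-space conversion between XYZ and RGB, carried out in floating point to sidestep the 8-bit integer limit. For the common sRGB/D65 case this is a fixed 3x3 linear transform; whether the framework uses exactly these primaries is an assumption, but the matrix below is the standard sRGB one:

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point, sRGB primaries).
XYZ_TO_RGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_rgb(xyz: np.ndarray) -> np.ndarray:
    """Convert (..., 3) XYZ values to linear sRGB, in float to avoid
    the 8-bit precision limit mentioned above."""
    return xyz @ XYZ_TO_RGB.T

def rgb_to_xyz(rgb: np.ndarray) -> np.ndarray:
    """Inverse conversion, linear sRGB back to XYZ."""
    return rgb @ np.linalg.inv(XYZ_TO_RGB).T
```

The D65 white point (X, Y, Z) = (0.9505, 1.0, 1.089) maps to approximately (1, 1, 1) in linear RGB, a quick sanity check on the matrix.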
Robert C. Lorenz
Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual towards playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after the training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the ventral striatum in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.
Lorenz, Robert C; Gleich, Tobias; Gallinat, Jürgen; Kühn, Simone
Video games contain elaborate reinforcement and reward schedules that have the potential to maximize motivation. Neuroimaging studies suggest that video games might have an influence on the reward system. However, it is not clear whether reward-related properties represent a precondition, which biases an individual toward playing video games, or if these changes are the result of playing video games. Therefore, we conducted a longitudinal study to explore reward-related functional predictors in relation to video gaming experience as well as functional changes in the brain in response to video game training. Fifty healthy participants were randomly assigned to a video game training (TG) or control group (CG). Before and after training/control period, functional magnetic resonance imaging (fMRI) was conducted using a non-video game related reward task. At pretest, both groups showed strongest activation in ventral striatum (VS) during reward anticipation. At posttest, the TG showed very similar VS activity compared to pretest. In the CG, the VS activity was significantly attenuated. This longitudinal study revealed that video game training may preserve reward responsiveness in the VS in a retest situation over time. We suggest that video games are able to keep striatal responses to reward flexible, a mechanism which might be of critical value for applications such as therapeutic cognitive training.
It is now over 20 years since Ferranti plc introduced optically projected map displays into operational aircraft navigation systems. Then, as now, it was the function of the display to present an image of a topographical map to a pilot or navigator with his present position clearly identified. Then, as now, the map image was projected from a reduced image stored on colour microfilm. Then, as now, the fundamental design problems are the same. In the exposed environment of an aircraft cockpit, where brightness levels may vary from those associated with direct sunlight on the one hand to starlight on the other, how does one design an optical system with sufficient luminance, contrast and resolution when in the daytime sunlight may fall on the display or in the pilot's eyes, while at night the display luminance must not detract from the pilot's ability to pick up external cues? This paper traces the development of Ferranti plc optically projected map displays from the early V Bomber and the ill-fated TSR2 displays to the Harrier and Concorde displays. It then goes on to the development of combined map and electronic displays (COMED), showing how an earlier design, as fitted to Tornado, has been developed into the current COMED design which is fitted to the F-18 and Jaguar aircraft. In each of the above display systems particular features of optical design interest are identified, and their impact on the design as a whole is discussed. The use of prisms both for optical rotation and translation, techniques for the maximisation of luminance, the problems associated with contrast enhancement (particularly with polarising filters in the presence of optically active materials), the use of aerial image combining systems and the impact of the pilot interface on the system parameters are all included. Perhaps the most interesting result in considering the evolution of map displays has not been so much the designer's solutions in overcoming the various design problems but
Byrnes, Patrick D.; Higgins, William E.
Image-guided bronchoscopy is a critical component in the treatment of lung cancer and other pulmonary disorders. During bronchoscopy, a high-resolution endobronchial video stream facilitates guidance through the lungs and allows for visual inspection of a patient's airway mucosal surfaces. Despite the detailed information it contains, little effort has been made to incorporate recorded video into the clinical workflow. Follow-up procedures often required in cancer assessment or asthma treatment could significantly benefit from effectively parsed and summarized video. Tracking diagnostic regions of interest (ROIs) could potentially better equip physicians to detect early airway-wall cancer or improve asthma treatments, such as bronchial thermoplasty. To address this need, we have developed a system for the postoperative analysis of recorded endobronchial video. The system first parses an input video stream into endoscopic shots, derives motion information, and selects salient representative key frames. Next, a semi-automatic method for CT-video registration creates data linkages between a CT-derived airway-tree model and the input video. These data linkages then enable the construction of a CT-video chest model comprised of a bronchoscopy path history (BPH) - defining all airway locations visited during a procedure - and texture-mapping information for rendering registered video frames onto the airwaytree model. A suite of analysis tools is included to visualize and manipulate the extracted data. Video browsing and retrieval is facilitated through a video table of contents (TOC) and a search query interface. The system provides a variety of operational modes and additional functionality, including the ability to define regions of interest. We demonstrate the potential of our system using two human case study examples.
A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the
Bapu, P. T.; Aulds, M. J.; Fuchs, Steven P.; McCormick, David M.
We have designed a pilot's harness-mounted, high-voltage quick-disconnect connector with 62 pins, to transmit voltages up to 13.5 kV and video signals with 70 MHz bandwidth, for a binocular helmet-mounted display system. It connects and disconnects with power off, and disconnects 'hot' without pilot intervention and without producing external sparks or exposing hot embers to the explosive cockpit environment. We have implemented a procedure in which the high voltage pins disconnect inside a hermetically-sealed unit before the physical separation of the connector. The 'hot' separation triggers a crowbar circuit in the high voltage power supplies for additional protection. Conductor locations and shields are designed to reduce capacitance in the circuit and avoid crosstalk among adjacent circuits. The quick-disconnect connector and wiring harness are human-engineered to ensure pilot safety and mobility. The connector backshell is equipped with two hybrid video amplifiers to improve the clarity of the video signals. Shielded wires and coaxial cables are molded as a multi-layered ribbon for maximum flexibility between the pilot's harness and helmet. Stiff cabling is provided between the quick-disconnect connector and the aircraft console to control behavior during seat ejection. The components of the system have been successfully tested for safety, performance, ergonomic considerations, and reliability.
Zhang, Zhengbing; Deng, Huiping; Xia, Zhenhua
Video systems have been widely used in many fields such as conferencing, public security, military affairs and medical treatment. With the rapid development of FPGAs, the SOPC approach has received great attention in the area of image and video processing in recent years. A network video transmission system based on SOPC is proposed in this paper for the purpose of video acquisition, video encoding and network transmission. The hardware platform used to design the system is an SOPC board, Altera's DE2, which includes an FPGA chip (EP2C35F672C6), an Ethernet controller and a video I/O interface. An IP core, the Nios II embedded processor, is used as the CPU of the system. In addition, a hardware module for format conversion of video data, and another module to realize Motion-JPEG encoding, have been designed in Verilog HDL. These two modules are attached to the Nios II processor as peripherals through the Avalon bus. Simulation results show that these two modules work as expected. uClinux, including the TCP/IP protocol stack as well as the driver for the Ethernet controller, is chosen as the embedded operating system, and an application program scheme is proposed.
Alley, P. L.; Smith, G. R.
The wind shears program (WISP) supports the collection of data on magnetic tape for permanent storage or analysis. The document structure provides: (1) the hardware and software configuration required to execute the WISP system, and the start-up procedure from a power-down condition; (2) the data collection task, calculations performed on the incoming data, and a description of the magnetic tape format; (3) the data display task and examples of displays obtained from execution of the real-time simulation program; and (4) the raw data dump task and examples of operator actions required to obtain the desired format. The procedures outlined herein allow continuous data collection at the expense of real-time visual displays.
Laptenok, V. D.; Seregin, Y. N.; Bocharov, A. N.; Murygin, A. V.; Tynchenko, V. S.
A video observation system for the electron beam welding process was developed. Its construction reduces the negative effects of the welding environment on the video camera and yields high-quality images of the process.
Mantel, Claire; Søgaard, Jacob; Bech, Søren
This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant... is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net...
Ishikawa, Tomoya; Yamazawa, Kazumasa; Sato, Tomokazu; Ikeda, Sei; Nakamura, Yutaka; Fujikawa, Kazutoshi; Sunahara, Hideki; Yokoya, Naokazu
In this paper, we describe a new telepresence system which enables a user to look around a virtualized real world easily in network environments. The proposed system includes omni-directional video viewers on web browsers and allows the user to look around the omni-directional video contents on the web browsers. The omni-directional video viewer is implemented as an Active-X program so that the user can install the viewer automatically simply by opening the web site which contains the omni-directional video contents. The system allows many users at different sites to look around the scene, just like an interactive TV, using a multicast protocol without increasing the network traffic. This paper describes the implemented system and experiments using live and stored video streams. In the experiment with stored video streams, the system uses an omni-directional multi-camera system for video capture, and we can look around high-resolution, high-quality video content. In the experiment with live video streams, a car-mounted omni-directional camera acquires omni-directional video streams surrounding the car while driving in an outdoor environment. The acquired video streams are transferred to the remote site through the wireless and wired network using the multicast protocol, and we can view the live video content freely in an arbitrary direction. In both experiments, we have implemented view-dependent presentation with a head-mounted display (HMD) and a gyro sensor to realize a richer sense of presence.
Lebowsky, Fritz; Nicolas, Marina
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Subsequently, uncompressed pixel amplitude processing becomes costly not only when transmitting over cable or wireless communication channels, but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video enabling multidimensional error minimization with context-sensitive control of visually noticeable artifacts. As a result of analyzing image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates remaining visual artifacts.
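The loss that 4:4:4 to 4:2:0 preprocessing inflicts on high-contrast chroma edges can be reproduced directly: each chroma plane is downsampled by two in both directions and later upsampled for display. A minimal sketch (the 2x2 averaging filter and nearest-neighbour reconstruction are simplifying assumptions; real pipelines use longer filters):

```python
import numpy as np

def chroma_420(plane: np.ndarray) -> np.ndarray:
    """Downsample one chroma plane from 4:4:4 to 4:2:0 by 2x2 averaging."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def chroma_upsample(plane420: np.ndarray) -> np.ndarray:
    """Nearest-neighbour reconstruction back to full resolution."""
    return plane420.repeat(2, axis=0).repeat(2, axis=1)
```

A sharp chroma edge that does not fall on an even column is averaged into an intermediate value, which is exactly the smearing of small colored text described above.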
Craven, Michael P.; Simons, Lucy; Gillott, Alinda; North, Steve; Schnädelbach, Holger; Young, Zoe
Networked Urban Screens offer new possibilities for public health education and awareness. An information video about Attention Deficit Hyperactivity Disorder (ADHD) was combined with a custom browser-based video game and successfully deployed on an existing research platform, Screens in the Wild (SitW). The SitW platform consists of 46-in. touchscreen or interactive displays, a camera, a microphone and a speaker, deployed at four urban locations in England. Details of the platform and softwa...
A short review is given on new display technologies such as plasma, liquid crystals, light emitting diodes, electroluminescence and electrochromism. It is stated that thin or thick film or hybrid techniques are essential for all the different types of display. Comparing the performance data of displays the advantages, disadvantages, appropriate applications and future developments are described. Finally the display market and its growth are discussed briefly.
Aidlen, Jeremy T; Glick, Sara; Silverman, Kenneth; Silverman, Harvey F; Luks, Francois I
Light-weight, low-profile, and high-resolution head-mounted displays (HMDs) now allow personalized viewing of a laparoscopic image. The advantages include unobstructed viewing, regardless of position at the operating table, and the possibility to customize the image (i.e., enhanced reality, picture-in-picture, etc.). The bright image display allows use in daylight surroundings, and the low profile of the HMD provides adequate peripheral vision. Theoretic disadvantages include reliance of all viewers on the same image capture, and anticues (i.e., reality disconnect) when the projected image remains static despite changes in head position, which can lead to discomfort and even nausea. We have developed a prototype of an interactive laparoscopic image display that allows hands-free control of the displayed image through changes in the spatial orientation of the operator's head. The prototype consists of an HMD, a spatial orientation device, and computer software to enable hands-free panning and zooming of a video-endoscopic image display. The spatial orientation device uses magnetic fields created by a transmitter and receiver, each containing three orthogonal coils. The transmitter coils are efficiently driven, using USB power only, by a newly developed circuit, each at a unique frequency. The HMD-mounted receiver system links to a commercially available PC-interface PCI-bus sound card (M-Audiocard Delta 44; Avid Technology, Tewksbury, MA). Analog signals at the receiver are filtered, amplified, and converted to digital signals, which are processed to control the image display. The prototype uses a proprietary static fish-eye lens and software for the distortion-free reconstitution of any portion of the captured image. Left-right and up-down motions of the head (and HMD) produce real-time panning of the displayed image. Motion of the head toward, or away from, the transmitter causes real-time zooming in or out, respectively, of the displayed image. This prototype of the interactive HMD
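The hands-free control described above reduces to a mapping from head pose to pan offsets and a zoom factor. A hedged sketch of such a mapping (the gains, units, and reference distance below are illustrative assumptions, not the prototype's calibration):

```python
def head_to_view(yaw_deg: float, pitch_deg: float, distance_cm: float,
                 ref_distance_cm: float = 60.0, gain: float = 4.0):
    """Map head pose to pan offsets (pixels) and a zoom factor.

    Left-right (yaw) and up-down (pitch) head motion pan the displayed
    window across the captured fish-eye image; moving toward the
    transmitter (smaller distance) zooms in, moving away zooms out.
    """
    pan_x = gain * yaw_deg    # horizontal pan in pixels
    pan_y = gain * pitch_deg  # vertical pan in pixels
    zoom = ref_distance_cm / max(distance_cm, 1e-6)  # >1 means zoom in
    return pan_x, pan_y, zoom
```

In a real system the pose would arrive continuously from the orientation receiver and the mapping would be smoothed to avoid jitter.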
Kasprowicz, Grzegorz; Pastuszak, Grzegorz; Poźniak, Krzysztof; Trochimiuk, Maciej; Abramowski, Andrzej; Gaska, Michal; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Pawel; Jewartowski, Blazej; Frasunek, Przemysław; Nalbach-Moszynska, Małgorzata; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata
The purpose of the project is development of a platform which integrates video signals from many sources. The signals can be sourced by existing analogue CCTV surveillance installations, recent internet-protocol (IP) cameras or single cameras of any type. The system will consist of portable devices that provide conversion, encoding, transmission and archiving. The sharing subsystem will use distributed file system and also user console which provides simultaneous access to any of video streams in real time. The system is fully modular so its extension is possible, both from hardware and software side. Due to standard modular technology used, partial technology modernization is also possible during a long exploitation period.
Automated video object recognition is a topic of emerging importance in both defense and civilian applications. This work describes an accurate and low-power neuromorphic architecture and system for real-time automated video object recognition. Our system, Neuromorphic Visual Understanding of Scenes (NEOVUS), is inspired by recent findings in computational neuroscience on feed-forward object detection and classification pipelines for processing and extracting relevant information from visual data. The NEOVUS architecture is inspired by the ventral (what) and dorsal (where) streams of the mammalian visual pathway and combines retinal processing, form-based and motion-based object detection, and convolutional neural network based object classification. Our system was evaluated by the Defense Advanced Research Projects Agency (DARPA) under the NEOVISION2 program on a variety of urban area video datasets collected from both stationary and moving platforms. The datasets are challenging as they include a large number of targets in cluttered scenes with varying illumination and occlusion conditions. The NEOVUS system was also mapped to commercially available off-the-shelf hardware. The dynamic power requirement for the system, which includes a 5.6-Mpixel retinal camera processed by object detection and classification algorithms at 30 frames per second, was measured at 21.7 Watts (W), for an effective energy consumption of 5.4 nanojoules (nJ) per bit of incoming video. In a systematic evaluation of five different teams by DARPA on three aerial datasets, the NEOVUS demonstrated the best performance, with the highest recognition accuracy and at least three orders of magnitude lower energy consumption than two independent state-of-the-art computer vision systems. These unprecedented results show that the NEOVUS has the potential to revolutionize automated video object recognition towards enabling practical low-power and mobile video processing applications.
Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.
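The image-based 3D warping at the core of such view synthesis shifts each pixel horizontally in proportion to its disparity, leaving holes where no source pixel lands; the layered approach then fills and filters those holes. A simplified single-layer sketch (the hole marker and the lack of occlusion ordering are simplifications of the actual algorithm):

```python
import numpy as np

def warp_view(plane: np.ndarray, disparity: np.ndarray,
              alpha: float) -> np.ndarray:
    """Warp a single-channel view toward an intermediate viewpoint.

    Each pixel moves horizontally by alpha * disparity; positions that
    receive no source pixel stay at -1, marking holes for later filling.
    """
    h, w = plane.shape
    out = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = plane[y, x]
    return out
```

With alpha interpolated between 0 and 1, the same routine produces the sequence of intermediate views a multiview autostereoscopic display requires.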
Mantel, Claire; Korhonen, Jari; Forchhammer, Søren
In this paper the influence of ambient light and the peak white (maximum brightness) of a display on the subjective quality of videos shown with local backlight dimming is examined. A subjective experiment investigating those factors is set up using high-contrast test sequences. The results are firstly...
Simcox, William A.
A comprehensive development process for display design, focusing on computer-generated cathode ray tube (CRT) displays, is presented. A framework is created for breaking the display into its component parts, which is used to guide the design process. The objective is to design or select the most cost-effective graphics solution (hardware and software) to…
Xia, Jiali; Jin, Jesse S.
Video-On-Demand is a new development on the Internet. To manage rich multimedia information and a large number of users, we present an Internet Video-On-Demand system with E-Commerce features. This paper presents the system architecture and the technologies required for its implementation. The system provides interactive Video-On-Demand services in which the user has complete control over the session presentation, and allows the user to select and receive specific video information by querying the database. To improve the performance of video information retrieval and management, the video information is represented as hierarchical video metadata in XML format. The video metadatabase stores video information in this hierarchical structure and allows the user to search for video shots at different semantic levels. When browsing retrieved video, the user not only has the full VCR capabilities of traditional Video-On-Demand, but can also browse the video hierarchically to view different shots. To manage a large number of users over the Internet, a membership database is designed and managed in an E-Commerce environment, which allows users to access the video database at different access levels.
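The hierarchical XML metadata idea above can be sketched with the standard library. All element and attribute names below are hypothetical illustrations, not the schema of the actual system:

```python
# Minimal sketch of hierarchical video metadata in XML: scenes contain shots,
# and a query can address the shot level directly. Element/attribute names
# are invented for illustration.
import xml.etree.ElementTree as ET

xml_doc = """
<video title="campus_tour">
  <scene id="1">
    <shot id="1.1" start="0" end="120" keywords="entrance gate"/>
    <shot id="1.2" start="121" end="300" keywords="library exterior"/>
  </scene>
  <scene id="2">
    <shot id="2.1" start="301" end="450" keywords="library interior"/>
  </scene>
</video>
"""

root = ET.fromstring(xml_doc)

# Search at the shot level: find shots whose keywords mention "library".
hits = [s.get("id") for s in root.iter("shot")
        if "library" in s.get("keywords", "")]
print(hits)  # ['1.2', '2.1']
```

The same tree supports coarser queries (whole scenes) or finer ones (time ranges within a shot), which is what "searching at different semantic levels" amounts to.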
Mocci, F; Serra, A; Corrias, G A
To examine the part played by psychological factors in complaints about visual health reported by banking officers who work at video display terminals (VDTs). Out of a population of 385 bank workers, a group of 212 subjects without organic visual disturbances (as determined by ophthalmological examination) who shared a work environment and job duties was selected. Three questionnaires were administered to these subjects: (a) the NIOSH job stress questionnaire; (b) a questionnaire investigating subjective discomfort related to environmental and lighting conditions of the workplace; (c) a questionnaire on the existence of oculovisual disturbances. Correlation and multiple regression analyses were performed to test for predictors of asthenopia. Social support, group conflict, self esteem, work satisfaction, and underuse of skills were found to be predictors of visual complaints; social support also played a part as a moderating factor in the stress and strain model, and this model accounted for 30% of the variance. Subjective environmental factors, although in some cases significantly correlated with asthenopia, were not found to be strong predictors of the symptoms. Some of the complaints about visual health reported by VDT workers are likely indirect expressions of psychological discomfort related to working conditions.
In 1968, the display systems group of the Systems Laboratory of the NASA/Electronics Research Center undertook a research task in the area of computer-controlled flight information systems for aerospace application. The display laboratory of the Trans...
Kim, Ji Hyeon; Kim, Aram; Jo, Jung Hee; Kim, Ki Beom; Cheon, Sung Hyun; Cho, Joo Hyun; Sohn, Se Do; Baek, Seung Min [KEPCO, Youngin (Korea, Republic of)]
The safety display of a nuclear system has been classified as important to safety (Safety Integrity Level, SIL 3). Regulatory agencies are now imposing stricter safety requirements on digital safety display systems. To satisfy these requirements, it is necessary to develop a safety-critical (SIL 4) grade safety display system. This paper proposes an industrial-personal-computer-based safety display system with a safety-grade operating system and safety-grade display methods. The description consists of three parts: the background, the safety requirements, and the proposed safety display system design. The hardware platform is designed using a commercially available off-the-shelf processor board with a backplane bus. The operating system is customized for the nuclear safety display application. The display unit design adopts two improvements: one is to provide separate processors for the main computer and the display device, connected by serial communication, and the other is to use a Digital Visual Interface between the main computer and the display device. In this design the main computer uses minimal graphic functions for the safety display. The display design is at the conceptual phase, and several open areas must still be made concrete for a solid system. The main purpose of this paper is to describe and suggest a methodology for developing a safety-critical display system; the descriptions focus on the safety requirements point of view.
Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.
Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer viewpoint has therefore become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used e.g. in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers because of image distortions when rendering images for viewpoints away from a sweet spot. We developed a technique to compensate these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced into this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control, whereas the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
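The "motion parallax gain" described above can be sketched as a simple scaling of tracked head displacement before it drives the virtual camera. The function name, the reference (sweet-spot) position and the coordinate values are illustrative assumptions:

```python
# Sketch of a motion parallax gain: the virtual camera follows the tracked
# head displacement about a reference viewing position, scaled by a gain.
# Gain 1.0 reproduces real parallax; gains below 1 attenuate it.
import numpy as np

def virtual_camera_position(head_pos, sweet_spot, gain):
    """Scale head displacement about the sweet spot by the parallax gain."""
    head_pos = np.asarray(head_pos, dtype=float)
    sweet_spot = np.asarray(sweet_spot, dtype=float)
    return sweet_spot + gain * (head_pos - sweet_spot)

sweet = [0.0, 1.6, 2.0]           # nominal viewing position (m), assumed
head  = [0.3, 1.6, 2.0]           # observer moved 0.3 m to the side

print(virtual_camera_position(head, sweet, 1.0))  # unity gain: follows head
print(virtual_camera_position(head, sweet, 0.5))  # attenuated parallax
```

The study's finding is then a statement about which values of `gain` observers tolerate: values below 1 degraded postural control, values above 1 were tolerated better.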
Gershkoff, I.; Haspert, J. K.; Morgenstern, B.
A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, and operations and maintenance. The design of the model permits flexibility in specifying the network and cost structure.
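The core selection step of such a model, choosing the least expensive path per site across the stated cost categories, can be sketched as follows; the path names and cost figures are invented placeholders, not data from the model:

```python
# Sketch of least-cost path selection: sum the four cost categories named in
# the abstract for each candidate transmission path and keep the cheapest.
CATEGORIES = ("capital", "installation", "lease", "operations_maintenance")

def path_cost(path):
    """Total lifecycle cost of one candidate path."""
    return sum(path[c] for c in CATEGORIES)

def cheapest_path(candidate_paths):
    """Pick the least expensive signal distribution path for a site."""
    return min(candidate_paths, key=path_cost)

# Hypothetical candidate paths for one participating site.
site_paths = [
    {"name": "Ku-band uplink A", "capital": 120e3, "installation": 15e3,
     "lease": 40e3, "operations_maintenance": 25e3},
    {"name": "C-band uplink B", "capital": 90e3, "installation": 20e3,
     "lease": 55e3, "operations_maintenance": 20e3},
]

best = cheapest_path(site_paths)
print(best["name"], path_cost(best))
```

Running this per site, once each for the uplink, downlink, and talkback segments, reproduces the model's overall structure.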
Kim, Tae-Hun; Kang, Jung Won; Kim, Kun Hyung; Lee, Min Hee; Kim, Jung Eun; Kim, Joo-Hee; Lee, Seunghoon; Shin, Mi-Suk; Jung, So-Young; Kim, Ae-Ran; Park, Hyo-Ju; Hong, Kwon Eui
This was a randomized controlled pilot trial to evaluate the effectiveness of cupping therapy for neck pain in video display terminal (VDT) workers. Forty VDT workers with moderate to severe neck pain were recruited from May 2011 to February 2012. Participants were randomly allocated to one of two interventions: 6 sessions of wet and dry cupping, or heating pad application. The participants were offered an exercise program to perform during the participation period. A 0 to 100 numeric rating scale (NRS) for neck pain, the measure yourself medical outcome profile 2 score (MYMOP2 score), cervical spine range of motion (C-spine ROM), the neck disability index (NDI), the EuroQol health index (EQ-5D), the short form stress response inventory (SRI-SF) and the fatigue severity scale (FSS) were assessed at several points during a 7-week period. Compared with a heating pad, cupping was more effective in improving pain (adjusted NRS difference: -1.29 [95% CI -1.61, -0.97] at 3 weeks (p=0.025) and -1.16 [-1.48, -0.84] at 7 weeks (p=0.005)), neck function (adjusted NDI difference: -0.79 [-1.11, -0.47] at 3 weeks (p=0.0039) and at 7 weeks), and quality of life (EQ-5D index higher with cupping than the 0.91 [0.86, 0.91] observed with heating pad treatment, p=0.0054). Four participants reported mild adverse events from cupping. Two weeks of cupping therapy and an exercise program may be effective in reducing pain and improving neck function in VDT workers.
Infante, Cristian; Weitz, Juan; Reyes, Tomas; Nussbaum, Miguel; Gomez, Florencia; Radovic, Darinka
Role Game is a co-located CSCL video game played by three students sitting at one machine sharing a single screen, each with their own input device. Inspired by video console games, Role Game enables students to learn by doing, acquiring social abilities and mastering subject matter in a context of co-located collaboration. After describing the…
Yahya Rasoulzadeh; Reza Gholamnia
Background: Work-related musculoskeletal disorders (WMSDs) are common among video display terminal (VDT) users, and prevention of these disorders among this population is a challenge for many workplaces today. Ergonomically improving VDT workstations may be an effective and applicable way to decrease the risk of WMSDs. This study evaluated the effect of an ergonomics-training program on the risk of WMSDs among VDT users. Methods: This study was conducted among a large group of computer users in the SAPCO industrial company, Tehran, Iran (...
Petkovic, M.; Jonker, Willem
An increasing number of large publicly available video libraries results in a demand for techniques that can manipulate the video data based on content. In this paper, we present a content-based video retrieval system called Cobra. The system supports automatic extraction and retrieval of high-level
Full Text Available In most sensitive occupations, such as the nuclear, military and chemical industries, closed-circuit systems and visual display terminals (VDTs) are used to carefully control and assess sensitive processes. Visual fatigue is one of the factors decreasing accuracy and concentration in operators, causing faulty perception. This study aimed to find a relationship between visual fatigue symptoms (VFS) and flicker value variations in video display terminal (VDT) operators. This cross-sectional study, conducted in 2011, examined visual fatigue and determined the relationship between its symptoms and visual flicker value changes in 248 VDT operators in several occupations. The materials used in this study were a visual fatigue questionnaire for VDTs and a VFM-90.1 device. Visual fatigue was measured in two stages (before beginning work and 60 min later). The data were analyzed with SPSS 11.5, using descriptive statistics, paired t-tests, simple and multiple linear regressions, and correlation and determination coefficients. Regression equations for changes in flicker value, depending on the changes in the main domains and in the final questionnaire score, were then obtained. Paired t-tests indicated significant differences in the mean score of visual fatigue symptoms and the mean flicker value between the two stages (P ≤ 0.001). Simple and multiple regressions of flicker value variations, for the final change in questionnaire score and the four main domains of the questionnaire, yielded R2 = 0.851 and R2 = 0.853, respectively. Correlation coefficients in the above tests indicated inverse and significant relationships of flicker value changes with changes in questionnaire score and visual fatigue symptoms. Diagnosing the first symptoms of visual fatigue could be an appropriate warning for VDT operators in sensitive occupations to react suitably, in behavior and management, to control or treat...
Full Text Available It is well recognized that electromagnetic fields can affect the biological functions of living organisms at both the cellular and molecular level. The potential damaging effects of electromagnetic fields and of the very low frequency and extremely low frequency radiation emitted by computer cathode ray tube video display monitors (VDMs) have become a concern within the scientific community. We studied the effects of occupational exposure to VDMs in 10 males and 10 females occupationally exposed to VDMs and 20 unexposed control subjects matched for age and sex. Genetic damage was assessed by examining the frequency of micronuclei in exfoliated buccal cells and the frequency of other nuclear abnormalities such as binucleated and broken-egg cells. Although there were no differences regarding binucleated cells between exposed and control individuals, our analysis revealed a significantly higher frequency of micronuclei (p < 0.001) and broken-egg cells (p < 0.05) in individuals exposed to VDMs compared to the unexposed. We also found that the differences among individuals exposed to VDMs were significantly related to sex, and that there was an increase in skin, central nervous system and ocular disease in the exposed individuals. These preliminary results indicate that microcomputer workers exposed to VDMs are at risk of significant cytogenetic damage and should periodically undergo biological monitoring.
Still, David L. (Inventor); Temme, Leonard A. (Inventor)
A human centered informational display is disclosed that can be used with vehicles (e.g. aircraft) and in other operational environments where rapid human centered comprehension of an operational environment is required. The informational display, called OZ, integrates all cockpit information into a single display in such a way that the pilot can clearly understand at a glance his or her spatial orientation, flight performance, engine status and power management issues, radio aids, and the location of other air traffic, runways, weather, and terrain features. Because OZ presents the information as an integrated whole, the pilot instantaneously recognizes flight path deviations and is instinctively drawn to the corrective maneuvers. Our laboratory studies indicate that OZ transfers to the pilot all of the integrated display information in less than 200 milliseconds, and the reacquisition of scan can be accomplished just as quickly. Thus, the time constants for forming a mental model are near instantaneous, and the pilot's ability to keep up with rapidly changing and threatening environments is tremendously enhanced. OZ is most easily compatible with aircraft that have flight path information coded electronically. With the correct sensors (which are currently available), OZ can be installed in essentially all current aircraft.
Hsu, Chong Cheng; Yang, Chih Wei [Institute of Nuclear Energy Research, Atomic Energy Council, Taoyuan (China)]
Current digital instrumentation and control (I&C) and main control room (MCR) technology has extended the capability of integrating information from numerous plant systems and transmitting needed information to operations personnel in a timely manner that could not be envisioned when previous-generation plants were designed and built. An MCR operator can complete all necessary operating actions on the video display unit (VDU), and it is extremely flexible and convenient for operators to select and control the system display on the screen. However, a high degree of digitalization carries some risks; for example, in nuclear power plants, failures in the instrumentation and control devices could stop the operation of the plant. Human factors engineering (HFE) approaches offer a way to address this problem. Under HFE considerations, a 'population stereotype' exists for operation: the operator is used to operating a specific display on a specific VDU. Under emergency conditions, there is a possibility that the operator will respond according to this habitual population stereotype and not be aware that the situation has already changed. Accordingly, an advanced nuclear power plant should establish an MCR VDU configuration plan that meets the consistent-teamwork goal under normal operation, transient and accident conditions. It should also establish a human factors verification and validation (V&V) plan for the MCR VDU configuration, to verify and validate the configuration of the MCR VDUs and to ensure that it allows the operating shift to meet the HFE considerations and the consistent-teamwork goal under normal operation, transient and accident conditions. This paper presents one of the HF V&V plans for the MCR VDU configuration of an advanced nuclear power plant. The purpose of this study is to confirm whether the VDU configuration meets the human factors principles and the consistent...
Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean
Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…
Full Text Available Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
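The frame-selection idea above can be illustrated in software with a much simpler scheme than the paper's clustering-based one; the sketch below uses plain frame differencing with a threshold, and its parameter values are illustrative assumptions:

```python
# Simplified software sketch of motion-based frame selection of the kind the
# VLSI architecture implements in hardware. Plain frame differencing stands
# in for the actual clustering-based scheme, which is not reproduced here.
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Boolean mask of pixels whose intensity changed significantly."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

def frame_of_interest(prev_frame, curr_frame, min_changed_fraction=0.01):
    """Select a frame when enough of its pixels moved."""
    mask = motion_mask(prev_frame, curr_frame)
    return mask.mean() > min_changed_fraction

# Toy example at PAL resolution (576 x 720): a bright block "moves" between
# two otherwise static frames.
f0 = np.zeros((576, 720), dtype=np.uint8)
f1 = f0.copy()
f1[100:200, 100:200] = 255

print(frame_of_interest(f0, f1))  # True: the moving block triggers selection
```

A hardware pipeline streams this per-pixel comparison at line rate, which is why the prototype can keep up with live 720 × 576 video.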
Polnau, D G; Ma, P M
Neuroethology seeks to uncover the neural mechanisms underlying natural behaviour. One of the major challenges in this field is the need to directly correlate neural activity and behavioural output. In most cases, recording of neural activity in freely moving animals is extremely difficult. However, electromyographic recording can often be used in lieu of neural recording to gain an understanding of the motor output program underlying a well-defined behaviour. Electromyographic recording is less invasive than most other recording methods and does not impede the performance of most natural tasks. Using the opercular display of the Siamese fighting fish as a model, we developed a protocol for directly correlating electromyographic activity and the kinematics of opercular movement: electromyographic activity was recorded in the audio channel of a video cassette recorder while videotaping the display behaviour. By combining computer-assisted, quantitative video analysis and spike analysis, the kinematics of opercular movement are linked to the motor output program. Since the muscle that mediates opercular abduction in this fish, the dilator operculi, is a relatively small muscle with several subdivisions, we also describe methods for recording from small muscles and marking the precise recording site with electrolytic corrosion. The protocol described here is applicable to studies of a variety of natural behaviours that can be performed in a relatively confined space. It is also useful for analyzing complex or rapidly changing behaviour in which a precise correlation between kinematics and electromyography is required.
Doule, Ondrej; Miranda, David; Hochstadt, Jake
The Integrated Display and Environmental Awareness System (IDEAS) is an interdisciplinary team project focusing on the development of a wearable computer and Head Mounted Display (HMD) based on Commercial-Off-The-Shelf (COTS) components for the specific applications and needs of NASA technicians, engineers and astronauts. Wearable computers are on the verge of utilization trials in daily life as well as industrial environments. The first civil and COTS wearable head-mounted display systems were introduced just a few years ago, and they probed not only technology readiness in terms of performance, endurance, miniaturization, operability and usefulness, but also the maturity of practice from a socio-technical perspective. Although the main technical hurdles, such as mass and power, were addressed as improvements on the technical side, usefulness, practicality and social acceptance were often noted as issues across a broad variety of human operations. In other words, although the technology made a giant leap, its use and efficiency are still looking for the sweet spot. The first IDEAS project started in January 2015 and was concluded in January 2017. The project identified current COTS systems' capabilities at minimum cost and maximum applicability and produced important strategic concepts that will serve further IDEAS-like system development.
Full Text Available High-fidelity color image reproduction is one of the key issues in visual telecommunication systems, for electronic commerce, telemedicine, digital museums and so on. All colorimetric standards for display systems are to the present day trichromatic. But from the shape of the horseshoe area of all existing colors in the CIE xy chromaticity diagram it follows that with three real reproduction lights, the stated area in the CIE xy chromaticity diagram cannot be covered. Expanding the color gamut of a display device is possible in a few ways. In this paper, the approach of increasing the number of primaries is studied. A fourth, cyan primary is added to the three conventional ones to enlarge the color gamut of reproduction towards cyans and yellow-oranges. An original method of color management for this new display unit is introduced. In addition, the color gamut of the designed additive-based display compares favourably with the color gamut of a modern subtractive-based system. A display with more than three primary colors is called a multiprimary color display. A very advantageous property of such a display is the ability to display metameric colors.
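The color-management problem for a four-primary display can be sketched in a few lines: matching a target CIE XYZ stimulus gives 3 equations in 4 channel intensities, so solutions form a one-parameter family, which is exactly the freedom that lets such a display reproduce metamers. The primary coordinates below are invented for illustration, not measured values from the paper:

```python
# Sketch of driving a four-primary (RGB + cyan) display. The columns of P are
# the (invented) CIE XYZ coordinates of each primary at full drive; matching a
# target XYZ is an underdetermined 3x4 linear system.
import numpy as np

P = np.array([
    [0.41, 0.36, 0.18, 0.20],   # X of red, green, blue, cyan primaries
    [0.21, 0.72, 0.07, 0.35],   # Y
    [0.02, 0.12, 0.95, 0.60],   # Z
])

target_xyz = np.array([0.45, 0.50, 0.40])

# Minimum-norm channel intensities via the Moore-Penrose pseudo-inverse;
# a real color-management method would instead choose within the solution
# family subject to physical (0..1) drive constraints.
drive = np.linalg.pinv(P) @ target_xyz

print(np.round(drive, 3))
print(np.allclose(P @ drive, target_xyz))  # True: the target is reproduced
```

Any vector in the one-dimensional null space of P can be added to `drive` without changing the displayed XYZ, which is the mechanism behind displaying metameric colors.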
Urey, Hakan; Nestorovic, Ned; Ng, Baldwin S.; Gross, Abraham A.
The Virtual Retinal Display™ (VRD™) technology is a new display technology being developed at Microvision Inc. The displayed image is scanned onto the viewer's retina using low-power red, green, and blue light sources. Microvision's proprietary miniaturized scanner designs make the VRD system very well suited for head-mounted displays. In this paper we discuss some of the advantages of the VRD technology, various ocular designs for HMDs and other applications, and details of constructing a system MTF budget for laser scanning systems that includes electronics, modulators, scanners, and optics.
Full Text Available This innovative glasses design carries an OLED-based display controlled via an Arduino Nano board with Bluetooth connectivity to a smartphone for exchanging information, along with an onboard accelerometer. A tilt-angle sensor is used to detect whether the driver is feeling drowsy, and an alcohol sensor has been added to promote safe driving habits. The glasses receive live updates on the current vehicle speed, navigation directions, nearby or approaching sign boards, and services such as petrol pumps. They also display information such as incoming calls or received messages. All this information is obtained through a smartphone connected via Bluetooth. Car mileage can also be monitored with the help of a fuel sensor, since fuel consumption is directly related to it; any abnormalities detected are immediately shown in the glasses. The angle of the tilt-angle sensor can be defined and set by the user according to his needs. The main reason for using OLED glasses is that OLEDs are organic, which helps reduce the carbon footprint, and quite slim, so the display can be mounted on spectacles without making them heavy. They also offer a higher level of flexibility and have low power drain and energy consumption.
Comprehensive guidelines are available for display design applications after the general system parameters have been specified. Some recommendations... display design ('cognitive' functions being the most salient and critical of those remaining for the operator in advanced C3I systems). The principles...are derived from a review of the literatures on human cognition, HCI, and display design, some original research, and liberal interpretation by the
Sakamoto, Kunio; Kanazawa, Fumihiro
An olfactory display is a device that delivers smells to the nose. It provides us with special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with an olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that can detect the nose position for effective delivery.
Mckeeman, John C.; Remaklus, P. William
Electronic system acquires, controls processing of, and displays data from experiments on propagation of phase-coherent radio signals at frequencies of 12, 20, and 30 GHz. Acquisition equipment coordinates flow of data from multiple input channels to computer. Software provides for multi-tasking and for interactive graphical displays, including easy-to-use windows and pulldown menus with mouse input. Offers outstanding accuracy; acquires and displays data and controls associated equipment, all in real time.
Dabney, Richard W. (Inventor); Elrod, Susan Vinz (Inventor)
Systems, methods and apparatus are provided through which an apparatus located on an airfield provides information to pilots in aircraft on the ground and simultaneously gathers information on the motion and position of the aircraft for controllers.
Goetz, G. A.; Mandel, Y.; Manivanh, R.; Palanker, D. V.; Čižmár, T.
Objective. We present a holographic near-the-eye display system enabling optical approaches for sight restoration to the blind, such as photovoltaic retinal prosthesis, optogenetic and other photoactivation techniques. We compare it with conventional liquid crystal displays (LCD) or digital light processing (DLP)-based displays in terms of image quality, field of view, optical efficiency and safety. Approach. We detail the optical configuration of the holographic display system and its characterization using a phase-only spatial light modulator. Main results. We describe approaches to controlling the zero diffraction order and speckle related issues in holographic display systems and assess the image quality of such systems. We show that holographic techniques offer significant advantages in terms of peak irradiance and power efficiency, and enable designs that are inherently safer than LCD or DLP-based systems. We demonstrate the performance of our holographic display system in the assessment of cortical response to alternating gratings projected onto the retinas of rats. Significance. We address the issues associated with the design of high brightness, near-the-eye display systems and propose solutions to the efficiency and safety challenges with an optical design which could be miniaturized and mounted onto goggles.
Bardram, Jakob Eyvind; Bossen, Claus; Lykke-Olesen, Andreas
Virtual studio technology enables the mixing of physical and digital 3D objects and thus expands the way of representing design ideas in terms of virtual video prototypes, which offers new possibilities for designers by combining elements of prototypes, mock-ups, scenarios, and conventional video. In this article we report our initial experience in the domain of pervasive healthcare with producing virtual video prototypes and using them in a design workshop. Our experience has been predominantly favourable. The production of a virtual video prototype forces the designers to decide very concrete design...
General topics for consideration when designing expert display systems and nuclear power plant control room displays are summarized. A system is proposed in which the display of segments (a combined series of graphic primitives, or a reusable collection of graphic primitives and primitive attributes stored in memory) controls a cathode-ray tube's screen to form an image of plant operations. The image consists of an icon of: (1) the process (heat engine cycle), (2) plant control systems, and (3) safety systems. A set of data-driven, forward-chaining computer-stored rules controls the display segments. As plant operation changes, measured plant data are processed through the rules, and the results control the deletion and addition of segments in the display format. The icon contains information needed by control room operators to monitor plant operations. One example of an expert display is illustrated for the operator's task of monitoring leakage from a safety valve in a steam line of a boiling water reactor (BWR). In another example, the use of an expert display to monitor plant operations during pre-trip, trip, and post-trip operations is discussed as a universal display.
Egan, J. T.; Macelroy, R. D.
A simple, microcomputer-based, interactive graphics display system has been developed for the presentation of perspective views of wire frame molecular models. The display system is based on a TERAK 8510a graphics computer system with a display unit consisting of microprocessor, television display and keyboard subsystems. The operating system includes a screen editor, file manager, Pascal and BASIC compilers and command options for linking and executing programs. The graphics program, written in UCSD Pascal, involves centering the coordinate system, transforming the centered model coordinates into homogeneous coordinates, constructing a viewing transformation matrix to operate on the coordinates, clipping invisible points, applying the perspective transformation and scaling to screen coordinates; commands available include ZOOM, ROTATE, RESET, and CHANGEVIEW. The data file structure was chosen to minimize the amount of disk storage space. Despite the inherent slowness of the system, its low cost and flexibility suggest general applicability.
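The pipeline described (centering the coordinate system, perspective transformation, clipping, scaling to screen coordinates) can be sketched minimally; the eye distance and screen scale below are illustrative assumptions, and this is a generic sketch, not the TERAK/UCSD Pascal code:

```python
def perspective_project(points, eye_z=5.0, screen_scale=100.0):
    """Center a wire-frame model on its centroid, clip points behind
    the viewer, and apply a simple perspective divide with scaling to
    screen coordinates. Parameters are illustrative placeholders."""
    n = len(points)
    # center the coordinate system on the model's centroid
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    projected = []
    for x, y, z in points:
        x, y, z = x - cx, y - cy, z - cz
        w = eye_z - z          # distance from the eye along the view axis
        if w <= 0:             # clip invisible points behind the viewer
            continue
        projected.append((screen_scale * x / w, screen_scale * y / w))
    return projected

# four atoms of a toy wire-frame model
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
screen = perspective_project(pts)
```

A full implementation would fold the centering, viewing and perspective steps into one homogeneous 4x4 matrix, as the abstract describes; the sketch keeps the steps separate for readability.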
Qiu, Jimmy; Hope, Andrew J.; Cho, B. C. John; Sharpe, Michael B.; Dickie, Colleen I.; DaCosta, Ralph S.; Jaffray, David A.; Weersink, Robert A.
We have developed a method to register and display 3D parametric data, in particular radiation dose, on two-dimensional endoscopic images. This registration of radiation dose to endoscopic or optical imaging may be valuable in assessment of normal tissue response to radiation, and visualization of radiated tissues in patients receiving post-radiation surgery. Electromagnetic sensors embedded in a flexible endoscope were used to track the position and orientation of the endoscope allowing registration of 2D endoscopic images to CT volumetric images and radiation doses planned with respect to these images. A surface was rendered from the CT image based on the air/tissue threshold, creating a virtual endoscopic view analogous to the real endoscopic view. Radiation dose at the surface or at known depth below the surface was assigned to each segment of the virtual surface. Dose could be displayed as either a colorwash on this surface or surface isodose lines. By assigning transparency levels to each surface segment based on dose or isoline location, the virtual dose display was overlaid onto the real endoscope image. Spatial accuracy of the dose display was tested using a cylindrical phantom with a treatment plan created for the phantom that matched dose levels with grid lines on the phantom surface. The accuracy of the dose display in these phantoms was 0.8-0.99 mm. To demonstrate clinical feasibility of this approach, the dose display was also tested on clinical data of a patient with laryngeal cancer treated with radiation therapy, with estimated display accuracy of ˜2-3 mm. The utility of the dose display for registration of radiation dose information to the surgical field was further demonstrated in a mock sarcoma case using a leg phantom. With direct overlay of radiation dose on endoscopic imaging, tissue toxicities and tumor response in endoluminal organs can be directly correlated with the actual tissue dose, offering a more nuanced assessment of normal tissue
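The colorwash-with-transparency overlay described above can be illustrated per pixel; the colour, alpha mapping and dose thresholds below are invented for the sketch and are not the registration system's actual values:

```python
def overlay_dose(endoscope_px, dose_gy, max_dose=70.0, threshold=20.0):
    """Blend a dose colorwash onto one endoscopic RGB pixel.
    Doses below `threshold` are fully transparent (tissue shows
    through); above it, opacity grows with dose. All numeric values
    here are illustrative, not the system's calibration."""
    if dose_gy < threshold:
        return endoscope_px                     # transparent: show tissue
    alpha = min(dose_gy / max_dose, 1.0) * 0.6  # keep partial transparency
    dose_color = (255, 0, 0)                    # red colorwash
    return tuple(round((1 - alpha) * c + alpha * d)
                 for c, d in zip(endoscope_px, dose_color))

pixel = overlay_dose((100, 120, 90), dose_gy=60.0)
low = overlay_dose((1, 2, 3), dose_gy=5.0)  # below threshold: unchanged
```

Assigning transparency from dose per surface segment, as the paper does, is the same idea applied to each rendered virtual-surface fragment rather than to a single pixel.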
Video streaming is nowadays the Internet's biggest source of consumer traffic. Traditional content providers rely on a centralised client-server model for distributing their video streaming content. The current generation is moving from being passive viewers, or content consumers, to active content
M. U. M. Bakura
The technology of displaying messages is an important part of communication and advertisement. In recent times, wireless communication has arrived on the big stage, and the world is moving to smartphone technology. This work describes the design and implementation of a microcontroller-based messaging display system. The messaging display system is interfaced with an Android application, which is then used to display information from the comfort of one's phone on an LCD screen over a Bluetooth interface. The work employs an ATMEGA328p microcontroller mounted on an Arduino board, a Bluetooth module (HC-06) and an LCD screen. Most existing electronic display systems use wired cable connections; the Bluetooth technology used in this work is aimed at removing the need for them. The microcontroller provides all the functionality for notice display and wireless control. A desired text message from a mobile phone is sent via the Android mobile application to the Bluetooth module located at the receiving end. The mobile application was created using online software called App Inventor. When the entire system was connected and tested, it functioned as designed without any noticeable problems. The Bluetooth module responded to commands sent from the Android application appropriately and in a timely manner. The system was able to display 80 characters on the 4 x 20 LCD within the 10 m range specified by the Bluetooth datasheet.
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Virk, Kamran; Li, Huiying; Forchhammer, Søren
This paper presents MPEG-2 decoder post-processing for high definition (HD) flat panel displays. The focus is to design efficient post-processing to reduce blocking and ringing artifacts. Standard deblocking modules are improved to obtain a significant load reduction through a new DCT based...
Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd
Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.
Three of the companies surveyed predicted the near-term conversion from hybrid stroke-raster displays to all-raster formats. Once color ... result in less reliance on skilled technicians and lower mean time to repair (MTTR). Lower cost: color display systems will decrease in cost ...
This paper presents the accounting information system in public companies, business technology matrix and data flow diagram. The paper describes the purpose and goals of the accounting process, matrix sub-process and data class. Data flow in the accounting process and the so-called general ledger module are described in detail. Activities of the financial statements and determining the financial statements of the companies are mentioned as well. It is stated how the general ledger module should function and what characteristics it must have. Line graphs will depict indicators of the company’s business success, indebtedness and company’s efficiency coefficients based on financial balance reports, and profit and loss report.
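A minimal sketch of the core invariant a general ledger module must enforce, namely that every posted journal entry balances debits against credits, follows; the account names and the tiny API are hypothetical, not taken from the paper:

```python
# Toy general-ledger sketch: an entry is a list of (account, debit,
# credit) lines, and it may only be posted if total debits equal
# total credits. Account names are illustrative.

class Ledger:
    def __init__(self):
        self.balances = {}

    def post(self, entry):
        """Post one journal entry; reject it if it does not balance."""
        debits = round(sum(d for _, d, _ in entry), 2)
        credits = round(sum(c for _, _, c in entry), 2)
        if debits != credits:
            raise ValueError("unbalanced journal entry")
        for account, debit, credit in entry:
            self.balances[account] = self.balances.get(account, 0.0) + debit - credit

ledger = Ledger()
ledger.post([("Cash", 1000.0, 0.0), ("Revenue", 0.0, 1000.0)])
```

Business-success and indebtedness indicators of the kind the paper graphs would then be computed from these account balances at reporting time.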
National Aeronautics and Space Administration — In response to the NASA need for a free-standing immersive virtual scene display system interfaced with an exercise treadmill to mimic terrestrial exercise...
Ferwerda, James A
.... Our system, based on a desktop PC with GPU hardware, LCD display, light and position sensors, and custom graphics software, supports the photometrically accurate and visually realistic real-time...
Maund, D. H.
The preliminary design concept for the energy systems in the Advanced Technology Display House is analyzed. Residential energy demand, energy conservation, and energy concepts are included. Photovoltaic arrays and REDOX (reduction oxidation) sizes are discussed.
Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo
This paper describes the first stages of a research project currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, steps prior to the content extraction task, and we discuss them to select the most suitable ones. We then outline a block design of a temporal segmentation module, and present guidelines for the design of the semantic segmentation module. All these operations tend to facilitate automation in the extraction of the low-level and semantic features that will finally form part of the video descriptors.
Sakamoto, Kunio; Hosomi, Takashi
The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.
Bastide, M.; Youbicier-Simo, B.J.; Lebecq, J.C.; Giaimis, J. [Laboratoire d'Immunologie et Parasitologie, Faculte de Pharmacie, Universite de Montpellier, Montpellier (France); Youbicier-Simo, B.J. [Tecnolab, Chateau de l'Orbize, Dracy-le-Fort (France)
The effects of continuous exposure of chick embryos and young chickens to the electromagnetic fields (EMFs) emitted by video display units (VDUs) and GSM cell phone radiation, either the whole spectrum emitted or attenuated by a copper gauze, were investigated. Permanent exposure to the EMFs radiated by a VDU was associated with significantly increased fetal loss (47-68%) and markedly depressed levels of circulating specific antibodies (IgG), corticosterone and melatonin. We have also shown that under chronic exposure conditions, GSM cell phone radiation was harmful to chick embryos, stressful for healthy mice and, in this species, synergistic with cancer insofar as it depleted stress hormones. The same pathological results were observed after substantial reduction of the microwaves radiated from the cell phone by attenuating them with a copper gauze. (author)
Demure, B; Luippold, R S; Bigelow, C; Ali, D; Mundt, K A; Liese, B
Associations between selected sites of musculoskeletal discomfort and ergonomic characteristics of the video display terminal (VDT) workstation were assessed in analyses controlling for demographic, psychosocial stress, and VDT use factors in 273 VDT users from a large administrative department. Significant associations with wrist/hand discomfort were seen for female gender; working 7+ hours at a VDT; low job satisfaction; poor keyboard position; use of new, adjustable furniture; and layout of the workstation. Significantly increased odds ratios for neck/shoulder discomfort were observed for 7+ hours at a VDT, less than complete job control, older age (40 to 49 years), and never/infrequent breaks. Lower back discomfort was related marginally to working 7+ hours at a VDT. These results demonstrate that some characteristics of VDT workstations, after accounting for psychosocial stress, can be correlated with musculoskeletal discomfort.
The present invention relates to a binocular device (44) and a system (40) including a binocular device (44) configured for displaying one or more labels for an input device (2), such as a keyboard or a control panel, comprising a plurality of parts (4, 6) configured for activation and registration by depression. The binocular device (44) is configured for displaying a label of an activation part (4) as a three-dimensional label at the activation part (4).
In order to support high-definition video transmission, an implementation of a video transmission system based on Long Term Evolution is designed. This system is developed on a Xilinx Virtex-6 FPGA ML605 Evaluation Board. The paper elaborates the features of the baseband link designed in Xilinx ISE and the protocol stack designed in Xilinx SDK, and introduces the process of setting up the hardware and software platform in Xilinx XPS. In tests, this system consumes few hardware resources and is able to transmit bidirectional video clearly and stably.
The recent development of three-dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with stereoscopic 3D video. The study suggests that change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that the interaction modality affects users' decisions in object selection in terms of the chosen location in 3D, while user attitudes do not have a significant impact. Furthermore, the ray-casting-based interaction modality using a Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.
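Ray-casting selection of the kind compared in the study is commonly implemented with a ray versus axis-aligned-box intersection test; the following generic slab-test sketch illustrates the technique under that assumption and is not the study's code:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: does a picking ray intersect an axis-aligned box?
    Generic ray-casting selection sketch; object bounds and the ray
    here are illustrative."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return False          # ray parallel to and outside this slab
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0

# ray from the viewer along +z toward a unit box centred at the origin
hit = ray_hits_box((0, 0, -5), (0, 0, 1), (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5))
```

A pointing device such as a Wiimote would supply `origin` and `direction` each frame, and the first box the ray hits becomes the selected object.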
Brunner, M; Ittner, W
This paper describes VIPER, the video image-processing system Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly, menu-driven and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a couple of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or by a simple call to the function-subroutine library in application programs. Examples are supplied in the area of biomedical research, e.g. in in-vivo microscopy.
Sakamoto, Kunio; Kanazawa, Fumihiro
The authors have researched a multimedia system and a support system for nursing studies and the practice of reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963. The process of thinking back on one's life and communicating about one's life to another person is called life review. There is a famous episode concerning memory, known as the Proustian effect. It is mentioned in Proust's novel in an episode in which the narrator recalls an old memory when he dips a madeleine in tea. Many scientists have since investigated why smells trigger memory. The authors pay attention to the relation between smells and memory, although the mechanism is not yet evident. They have therefore added an olfactory display to the multimedia system so that smells become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose. It provides special effects, for example emitting a smell as if you were there, or giving a trigger for reminding us of memories. The authors have developed a tabletop display system connected with the olfactory display. To deliver a flavor to the user's nose, the system needs to recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that can detect the nose position for effective delivery.
Hunt, Ruston M.; Frey, Paul R.
The development and evaluation of the Knowledge Aided Display Design (KADD) system is described. Developed to investigate several designer support concepts in the context of the design of computer-generated displays, KADD's implementation uses technology from several disciplines of computer science including data base design and management, graphics, expert systems, and real-time simulation. This paper discusses KADD's goals and concepts, the implementation of the system, and the results of a two-part evaluation to determine the effectiveness of the KADD concepts.
Botella, Guillermo; García, Carlos; Meyer-Bäse, Uwe
This contribution focuses on the different topics covered by the special issue titled 'Hardware Implementation of Machine Vision Systems', including FPGAs, GPUs, embedded systems, and multicore implementations for image analysis such as edge detection, segmentation, pattern recognition and object recognition/interpretation, image enhancement/restoration, image/video compression, image similarity and retrieval, satellite image processing, medical image processing, motion estimation, neuromorphic and bioinspired vision systems, video processing, image formation and physics based vision, 3D processing/coding, scene understanding, and multimedia.
Ohta, Mitsuo; Ogawa, Hitoshi; Ikuta, Akira
A probabilistic signal processing method is discussed with which it is possible to obtain methodological suggestions for measuring correlative and/or accumulative effects in a compound environment of sound, light and electromagnetic (EM) waves. In order to extract various types of latent interrelation characteristics among wave environmental factors leaked from an actually operating video display terminal (VDT), an extended regression system model, hierarchically reflecting not only linear but also nonlinear correlation information, is first introduced, especially from a viewpoint of 'relationism-first'. Then, by estimating each regression parameter of this model, original evaluation methods for predicting a whole probability distribution form, from one another, are proposed. Finally, the effectiveness of the methods is experimentally confirmed by applying them to actual observed data leaked by a VDT running television games. To cite this article: M. Ohta et al., C. R. Mecanique 333 (2005).
New light steering projectors in cinema form images by moving light away from dark regions into bright areas of an image. In these systems, the peak luminance of small features can far exceed full screen white luminance. In traditional projectors, where light is filtered or blocked in order to give shades of gray (or colors), the peak luminance is fixed. The luminance of chromatic features benefits in the same way as that of white features, and chromatic image details can be reproduced at high brightness, leading to a much wider overall color gamut coverage than previously possible. Projectors of this capability are desired by the creative community to aid in and enhance storytelling. Furthermore, the reduced light source power requirements of light steering projectors provide additional economic and environmental benefits. While the dependency of peak luminance level on (bright) image feature size is new in the digital cinema space, display technologies with identical characteristics such as OLED, LED LCD and plasma TVs are well established in the home. Similarly, direct view LED walls are popular in the events, advertising and architectural markets. To enable consistent color reproduction across devices in today's content production pipelines, models that describe modern projector and display attributes need to evolve together with HDR standards and available metadata. This paper is a first step towards rethinking legacy display descriptors such as contrast, peak luminance and color primaries in light of new display technology. We first summarize recent progress in the field of light steering projectors in cinema and then, based on new projector and existing display characteristics, propose the inclusion of two simple display attributes: Maximum Average Luminance and Peak (Color) Primary Luminance. We show that the proposed attributes allow a better prediction of content reproducibility on HDR displays. To validate this assertion, we test professional content on a commercial HDR
Ferwerda, James A.
We are developing tangible display systems that enable natural interaction with virtual surfaces. Tangible display systems are based on modern mobile devices that incorporate electronic image displays, graphics hardware, tracking systems, and digital cameras. Custom software allows the orientation of a device and the position of the observer to be tracked in real-time. Using this information, realistic images of surfaces with complex textures and material properties illuminated by environment-mapped lighting, can be rendered to the screen at interactive rates. Tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. In this way, tangible displays allow virtual surfaces to be observed and manipulated as naturally as real ones, with the added benefit that surface geometry and material properties can be modified in real-time. We demonstrate the utility of tangible display systems in four application areas: material appearance research; computer-aided appearance design; enhanced access to digital library and museum collections; and new tools for digital artists.
Flat panel displays are conventionally cooled by internal natural convection, which constrains the possible rate of heat transfer from the panel. On one hand, during the last few years, the power consumption and the related cooling requirement for 1080p displays have decreased, mostly due to energy savings from the switch to LED backlighting and more efficient electronics. On the other hand, the required cooling rate has recently started to increase with new directions in the industry such as 3D displays and ultra-high-resolution displays (recent 4K announcements and the planned introduction of 8K). In addition to these trends in display technology itself, there is also a trend to integrate consumer entertainment products into displays, with the ultimate goal of designing a multifunction device replacing the TV, the media player, the PC, the game console and the sound system. Considering the increasing power requirement for higher fidelity in video processing, these multifunction devices tend to generate very high heat fluxes, which are impossible to dissipate with internal natural convection. In order to overcome this obstacle, instead of active cooling with forced convection, which comes with the drawbacks of noise, additional power consumption, and reduced reliability, a passive cooling system relying on external natural convection and radiation is proposed here. The proposed cooling system consists of a flat heat pipe heat spreader and an aluminum plate-finned heat sink with anodized surfaces. For this system, the possible maximum heat dissipation rates from standard-size panels (in the 26-70 inch range) are estimated by using our recently obtained heat transfer correlations for natural convection from aluminum plate-finned heat sinks together with surface-to-surface radiation. With the use of the proposed passive cooling system, the possibility of dissipating very high heat rates is demonstrated, hinting at a promising green alternative to active cooling.
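As a back-of-the-envelope illustration of an estimate combining external natural convection and surface radiation, consider the following; the convection coefficient, emissivity and panel area are placeholder assumptions, not the correlations obtained in the paper:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def passive_dissipation(area_m2, t_surface_c, t_ambient_c, h=5.0, emissivity=0.85):
    """Rough heat rate (W) rejected by a panel through external natural
    convection plus radiation to the surroundings. h and emissivity are
    illustrative placeholders, not the paper's correlations."""
    ts, ta = t_surface_c + 273.15, t_ambient_c + 273.15
    q_conv = h * area_m2 * (ts - ta)                      # Newton cooling
    q_rad = emissivity * SIGMA * area_m2 * (ts**4 - ta**4)  # grey-body radiation
    return q_conv + q_rad

# e.g. roughly a 55-inch panel back surface (~0.83 m^2) at 45 C in a 25 C room
q = passive_dissipation(0.83, 45.0, 25.0)
```

With these illustrative numbers the two mechanisms contribute comparably (tens of watts each), which is why the paper combines fin-convection correlations with surface-to-surface radiation rather than neglecting either term.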
Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume, and physically places each 3D voxel in displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° range of views without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous
van der Schaar-Mitrea, Mihaela; de With, Peter H. N.
The diversity in TV images has increased with the growing use of computer graphics. In this paper we study a coding system that supports both the lossless coding of such graphics data and regular lossy video compression. The lossless coding techniques are based on run-length and arithmetic coding. For video compression, we introduce a simple block predictive coding technique featuring individual pixel access, so that it enables a gradual shift from lossless coding of graphics to lossy coding of video. An overall bit-rate control completes the system. Computer simulations show very high quality with a compression factor between 2 and 3.
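The lossless run-length stage can be illustrated with a minimal encoder/decoder pair; this is a generic sketch of the principle (well suited to the flat regions typical of graphics), not the paper's codec:

```python
def rle_encode(pixels):
    """Lossless run-length coding of one scan line: collapse each run
    of identical values into a (value, count) pair."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Invert rle_encode exactly (the coding is lossless)."""
    return [v for v, n in runs for _ in range(n)]

line = [0, 0, 0, 0, 255, 255, 17, 17, 17]
assert rle_decode(rle_encode(line)) == line  # lossless round trip
```

In the described system an arithmetic coder would further compress the run/value pairs, while natural-video blocks would fall through to the lossy predictive path instead.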
Slominski, Christopher J.; Parks, Mark A.; Debure, Kelly R.; Heaphy, William J.
The software created for the Control Display Units (CDUs), used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV), is described. Module descriptions are presented in a standardized format which contains module purpose, calling sequence, a detailed description, and global references. The global reference section includes subroutines, functions, and common variables referenced by a particular module. The CDUs, one for the pilot and one for the copilot, are used for flight management purposes. Operations performed with the CDU affect the aircraft's guidance, navigation, and display software.
Most universities already operate wired and wireless networks that are used to access integrated information systems and the Internet. It is therefore relevant to study how a system that broadcasts instructional video through access points performs in a university setting. Wired networks require cables to connect computers and carry data between them, while wireless networks connect computers through radio waves. This research tests how a WLAN access point performs when broadcasting instructional video from a server to clients. The study aims to show how to build a wireless network using an access point, and how to set up a server with supporting video software so that instructional video can be broadcast from the server to clients via the access point.
Mohamed M. Fouad
In this paper, we present a modified inter-view prediction Multiview Video Coding (MVC) scheme from the perspective of viewer interactivity. When a viewer requests some view(s), our scheme leads to a lower transmission bit-rate. We develop an interactive multiview video streaming system exploiting that modified MVC scheme. Conventional interactive multiview video systems require high bandwidth due to redundant data being transferred. With real test sequences, clear improvements are shown using the proposed interactive multiview video system compared to competing ones in terms of average transmission bit-rate and storage size of the decoded (i.e., transferred) data, with comparable rate-distortion.
Potel, Michael J.; MacKay, Steven A.; Sayre, Richard E.
Extracting quantitative information from movie film and video recordings has always been a difficult process. The Galatea motion analysis system represents an application of some powerful interactive computer graphics capabilities to this problem. A minicomputer is interfaced to a stop-motion projector, a data tablet, and real-time display equipment. An analyst views a film and uses the data tablet to track a moving position of interest. Simultaneously, a moving point is displayed in an animated computer graphics image that is synchronized with the film as it runs. Using a projection CRT and a series of mirrors, this image is superimposed on the film image on a large front screen. Thus, the graphics point lies on top of the point of interest in the film and moves with it at cine rates. All previously entered points can be displayed simultaneously in this way, which is extremely useful in checking the accuracy of the entries and in avoiding omission and duplication of points. Furthermore, the moving points can be connected into moving stick figures, so that such representations can be transcribed directly from film. There are many other tools in the system for entering outlines, measuring time intervals, and the like. The system is equivalent to "dynamic tracing paper" because it is used as though it were tracing paper that can keep up with running movie film. We have applied this system to a variety of problems in cell biology, cardiology, biomechanics, and anatomy. We have also extended the system using photogrammetric techniques to support entry of three-dimensional moving points from two (or more) films taken simultaneously from different perspective views. We are also presently constructing a second, lower-cost, microcomputer-based system for motion analysis in video, using digital graphics and video mixing to achieve the graphics overlay for any composite video source image.
... shall be “Open Video System Notice of Intent” and “Attention: Media Bureau.” This wording shall be... Notice of Intent with the Office of the Secretary and the Bureau Chief, Media Bureau. The Notice of... capacity through a fair, open and non-discriminatory process; the process must be insulated from any bias...
A novel video conference system is developed. Suppose that three people A, B, and C attend the video conference; the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangular video conference, each video station is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is maintained and conversation becomes much more natural than in conventional video conference systems, where participants' eyes do not point toward the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact can be maintained even in the situation mentioned above (eye contact between B and C from the perspective of A). Eye contact can be maintained not only for two or three participants but for any number, as long as they sit at the vertices of a regular polygon.
This paper reports on the development of an automated embedded video surveillance system using two customized embedded RISC processors. The application is partitioned into object tracking and video stream encoding subsystems. The real-time object tracker is able to detect and track moving objects in video images of scenes taken by stationary cameras. It is based on the block-matching algorithm. The video stream encoding involves the optimization of an International Telecommunication Union (ITU-T) H.263 baseline video encoder for quarter common intermediate format (QCIF) and common intermediate format (CIF) resolution images. The two subsystems running on two processor cores were integrated, and a simple protocol was added to realize the automated video surveillance system. The experimental results show that the system is capable of detecting, tracking, and encoding QCIF and CIF resolution images with object movements in them in real time. With low cycle-count, low transistor-count, and low power-consumption requirements, the system is ideal for deployment in remote locations.
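The tracker above is based on block matching. As a hedged illustration of that core step (not the paper's actual implementation), the sketch below searches a small window in the next frame for the displacement of a reference block that minimizes the sum of absolute differences (SAD); frames are toy 2-D lists of grey levels.

```python
# Minimal block-matching sketch: find the motion vector of a reference
# block by minimizing the sum of absolute differences (SAD) over a
# small search window. All sizes and data are illustrative.

def sad(frame, x, y, ref_block):
    n = len(ref_block)
    return sum(abs(frame[y + j][x + i] - ref_block[j][i])
               for j in range(n) for i in range(n))

def best_match(frame, ref_block, x0, y0, search=2):
    """Return the (dx, dy) displacement with the lowest SAD."""
    best = None
    n = len(ref_block)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if 0 <= x and 0 <= y and x + n <= len(frame[0]) and y + n <= len(frame):
                cost = sad(frame, x, y, ref_block)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best[1], best[2]

# A 2x2 bright block moves one pixel to the right between two frames.
prev = [[0, 9, 9, 0],
        [0, 9, 9, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
curr = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
ref = [row[1:3] for row in prev[0:2]]   # the block at (1, 0) in prev
print(best_match(curr, ref, 1, 0))      # prints (1, 0)
```

A real tracker would apply this per block over the whole frame and at QCIF/CIF resolutions, typically with a faster search pattern than this exhaustive one.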
Giroire, Frédéric; Huin, Nicolas
We study distributed systems for live video streaming. These systems can be of two types: structured and unstructured. In an unstructured system, diffusion is done opportunistically. The advantage is that it smoothly handles churn, that is, the arrival and departure of users, which is very high in live streaming systems. In contrast, in a structured system, the diffusion of the video is done using explicit diffusion trees. The advantage is that the dif...
Al-Hamad, A.; Moussa, A.; El-Sheimy, N.
The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources for mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the same results obtained from using separately captured images instead of video.
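The automatic extraction of highly overlapping frames can be sketched as follows. This is a hedged toy model, not the paper's algorithm: it assumes per-frame platform displacement is known and keeps a keyframe whenever estimated overlap with the last kept frame would drop below a target ratio; the footprint and threshold values are invented.

```python
# Hedged keyframe-selection sketch: keep a frame whenever the estimated
# overlap with the last kept frame falls below a target ratio. Overlap
# is modelled crudely as 1 - (distance travelled / footprint width);
# footprint and threshold are illustrative assumptions.

def select_keyframes(displacements, footprint=10.0, min_overlap=0.6):
    """displacements[i] = platform movement between frames i-1 and i."""
    kept = [0]
    travelled = 0.0
    for i, d in enumerate(displacements[1:], start=1):
        travelled += d
        if 1.0 - travelled / footprint < min_overlap:   # overlap too low
            kept.append(i)
            travelled = 0.0
    return kept

# 1 unit of motion per frame, footprint 10 units, 60% overlap target:
# a keyframe is taken every 5 frames.
print(select_keyframes([1.0] * 13))   # prints [0, 5, 10]
```

In practice overlap would be estimated from image content (feature matching) rather than assumed displacement, but the selection loop has the same shape.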
The synthesis of predictive display information and a direct lift control system are considered for path control tracking tasks (in particular, the landing task). Both solutions are based on pilot-vehicle system analysis and on requirements to provide the highest accuracy and lowest pilot workload. The investigation was carried out for cases with and without time delay in the aircraft dynamics. The efficiency of both approaches for flying-qualities improvement, and of their integration, is tested by ground-based simulation.
Korhonen, T; Ketola, R; Toivonen, R; Luukkonen, R; Häkkänen, M; Viikari-Juntura, E
To investigate work-related and individual factors as predictors of incident neck pain among office employees working with video display units (VDUs). Employees in three administrative units of a medium-sized city in Finland (n = 515) received mailed questionnaires in the baseline survey in 1998 and in the follow-up survey in 1999. The response rate for the baseline was 81% (n = 416); respondents who reported neck pain for fewer than eight days during the preceding 12 months were included in the study cohort as healthy subjects (n = 232). The follow-up questionnaire 12 months later was completed by 78% (n = 180). Incident neck pain cases were those reporting neck pain for at least eight days during the preceding 12 months. The annual incidence of neck pain was 34.4% (95% CI 25.5 to 41.3). A poor physical work environment and poor placement of the keyboard increased the risk of neck pain. Among the individual factors, female sex was a strong predictor. Smoking showed a tendency toward an increased risk of neck pain. There was an interaction between mental stress and physical exercise, with those reporting higher mental stress and less physical exercise having an especially high risk. In the prevention of neck disorders in office work with a high frequency of VDU tasks, attention should be given to the work environment in general and to the more specific aspects of VDU workstation layout. Physical exercise may prevent neck disorders among sedentary employees.
Bernstein, Lynne E.; Jiang, Jintao; Pantazis, Dimitrios; Lu, Zhong-Lin; Joshi, Anand
The talking face affords multiple types of information. To isolate cortical sites with responsibility for integrating linguistically relevant visual speech cues, speech and non-speech face gestures were presented in natural video and point-light displays during fMRI scanning at 3.0T. Participants with normal hearing viewed the stimuli and also viewed localizers for the fusiform face area (FFA), the lateral occipital complex (LOC), and the visual motion (V5/MT) regions of interest (ROIs). The FFA, the LOC, and V5/MT were significantly less activated for speech relative to non-speech and control stimuli. Distinct activation of the posterior superior temporal sulcus and the adjacent middle temporal gyrus to speech, independent of media, was obtained in group analyses. Individual analyses showed that speech and non-speech stimuli were associated with adjacent but different activations, with the speech activations more anterior. We suggest that the speech activation area is the temporal visual speech area (TVSA), and that it can be localized with the combination of stimuli used in this study. PMID:20853377
Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.
This document describes the software created for the Sperry Microprocessor Color Display System used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global reference section includes procedures and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight cathode ray tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.
Lee, Hyun; Um, Gi-Mun; Cheong, Won-Sik; Hur, Namho; Lee, Sung Jung; Kim, Changick
In this paper a new method for the autostereoscopic display, named the Dual Layer Parallax Barrier (DLPB) method, is introduced to overcome the limitation of the fixed viewing zone. Compared with the conventional parallax barrier methods, the proposed DLPB method uses moving parallax barriers to make the stereoscopic view changed according to the movement of viewer. In addition it provides seamless stereoscopic views without abrupt change of 3D depth feeling at any eye position. We implement a prototype of the DLPB system which consists of a switchable dual-layered Twisted Nematic Liquid Crystal Display (TN-LCD) and a head-tracker. The head tracker employs a video camera for capturing images, and is used to calculate the angle between the eye gazing direction and the projected direction onto the display plane. According to the head-tracker's control signal, the dual-layered TN-LCD is able to alternate the direction of viewing zone adaptively by a solid-state analog switch. The experimental results demonstrate that the proposed autostereoscopic display maintains seamless 3D views even when a viewer's head is moving. Moreover, its extended use towards mobile devices such as portable multimedia player (PMP), smartphone, and cellular phone is discussed as well.
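The head-tracker above computes the angle between the eye gazing direction and its projection onto the display plane. A minimal sketch of that geometry, under an assumed coordinate convention (x across the screen, z out of the screen, origin at the display centre) that is not taken from the paper:

```python
# Hedged head-tracker geometry sketch: the horizontal viewing angle
# between the viewer's gaze direction and the display normal follows
# from the viewer's position relative to the display centre.
# Coordinate convention is an assumption for illustration.

import math

def viewing_angle_deg(x, z):
    """Angle (degrees) between viewer direction and display normal."""
    return math.degrees(math.atan2(x, z))

# A viewer 0.5 m to the right at 0.5 m distance sits at 45 degrees;
# the barrier layers would shift the viewing zone accordingly.
print(round(viewing_angle_deg(0.5, 0.5), 1))   # prints 45.0
```

The control signal driving the dual-layered TN-LCD would be derived from this angle so the viewing zone follows the measured head position.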
Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes emotion recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors in order to design a low-cost gesture recognition device...
Kapustin, A. A.; Razumovskii, V. N.; Iatsevich, G. B.
A spatial-spectral analysis method is considered for a laser scanning video system with phase processing of the received signal at the modulation frequency. Distortions caused by the system are analyzed, and the general problem is reduced to the case of a cylindrical surface. The approach suggested can also be used for scanning microwave systems.
Okano, Fumio; Kawakita, Masahiro; Arai, Jun; Sasaki, Hisayuki; Yamashita, Takayuki; Sato, Masahito; Suehiro, Koya; Haino, Yasuyuki
The integral method enables observers to see 3D images like real objects. It requires extremely high resolution for both capture and display stages. We present an experimental 3D television system based on the integral method using an extremely high-resolution video system. The video system has 4,000 scanning lines using the diagonal offset method for two green channels. The number of elemental lenses in the lens array is 140 (vertical) × 182 (horizontal). The viewing zone angle is wider than 20 degrees in practice. This television system can capture 3D objects and provides full color and full parallax 3D images in real time.
Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.
This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.
Son, Han Seong [Joongbu University, Geumsan (Korea, Republic of); Kim, Hee Eun [KAIST, Daejeon (Korea, Republic of)
Cyber security has been a big issue since the instrumentation and control (I and C) systems of nuclear power plants (NPPs) were digitalized. A cyber-attack on an NPP should be dealt with seriously because it might cause not only economic loss but also radioactive material release. Research on the consequences of cyber-attacks on NPPs from a safety point of view has been conducted. A previous study shows the risk effect brought by initiation of an event and deterioration of mitigation functions by cyber terror. Although this study made conservative assumptions and simplifications, it gives insight into the effect of a cyber-attack. Another study shows that an error in a non-safety display system could cause wrong actions by operators. According to this previous study, the failure of an operator action caused by a cyber-attack on a display system might threaten the safety of the NPP by limiting appropriate mitigation actions. This study suggests a test strategy focusing on cyber-attacks on the information and display system, which might cause operator failure. The test strategy can be used to evaluate and complement security measures. To identify whether a cyber-attack on the information and display system can affect the mitigation actions of the operator, a strategy to obtain test scenarios is suggested. The failed mitigation scenario is identified first. Then, for the test target in the scenario, software failure modes are applied to identify realistic failure scenarios. Testing should be performed for those scenarios to confirm the integrity of data and to assure the effectiveness of security measures.
Trenchard, Michael E.; Lohrenz, Maura C.; Rosche, Henry, III; Wischow, Perry B.
The emergence of computerized mission planning systems (MPS) and airborne digital moving map systems (DMS) has necessitated the development of a global database of raster aeronautical chart data specifically designed for input to these systems. The Naval Oceanographic and Atmospheric Research Laboratory's (NOARL) Map Data Formatting Facility (MDFF) is presently dedicated to supporting these avionic display systems with the development of the Compressed Aeronautical Chart (CAC) database on Compact Disk Read Only Memory (CDROM) optical discs. The MDFF is also developing a series of aircraft-specific Write-Once Read Many (WORM) optical discs. NOARL has initiated a comprehensive research program aimed at improving the pilots' moving map displays; current research efforts include the development of an alternate image compression technique and generation of a standard set of color palettes. The CAC database will provide digital aeronautical chart data in six different scales. CAC is derived from the Defense Mapping Agency's (DMA) Equal Arc-second (ARC) Digitized Raster Graphics (ADRG), a series of scanned aeronautical charts. NOARL processes ADRG to tailor the chart image resolution to that of the DMS display while reducing storage requirements through image compression techniques. CAC is being distributed by DMA as a library of CDROMs.
... system operator may charge different rates to different classes of video programming providers, provided... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76...
Yang, Jinn-Cherng; Wu, Chang-Shuo; Hsiao, Chuan-Heng; Yang, Ming-Chieh; Liu, Wen-Chieh; Hung, Yi-Ping
An autostereoscopic display provides users great enjoyment of stereo visualization without the uncomfortable and inconvenient drawbacks of wearing stereo glasses. However, bandwidth constraints of current multi-view 3D displays severely restrict the number of views that can be displayed simultaneously without degrading resolution or increasing display cost unacceptably. An alternative to multiple-view presentation is to measure the position of the observer with a viewer-tracking sensor. The viewer-tracking component is a very important module for fluently rendering and accurately projecting the stereo video. In order to render stereo content with respect to the user's viewpoint and to optically project the content accurately onto the left and right eyes of the user, a real-time viewer-tracking technique that allows the user to move around freely while watching the autostereoscopic display is developed in this study. It comprises face detection using multiple eigenspaces for various lighting conditions and fast block matching for tracking four motion parameters of the user's face region. The Edge Orientation Histogram (EOH) on Real AdaBoost is also applied in this study to improve the performance of the original AdaBoost algorithm. The AdaBoost algorithm with Haar features from the OpenCV library developed by Intel is used to detect human faces, and detection accuracy is enhanced by rotating the image. The frame rate of the viewer-tracking process can reach up to 15 Hz. Since the performance of the viewer-tracking autostereoscopic display is still influenced by varying environmental conditions, the accuracy, robustness, and efficiency of the viewer-tracking system are evaluated in this study.
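The Edge Orientation Histogram feature named above can be illustrated with a minimal sketch. This is only the orientation-binning step under assumed parameters (4 bins over 180 degrees, a fixed gradient-magnitude threshold); real EOH features for AdaBoost are built from ratios of bin energies over sub-windows.

```python
# Hedged EOH sketch: bin gradient orientations of edge pixels into a
# small histogram. Bin count and edge threshold are assumptions.

import math

def edge_orientation_histogram(img, bins=4, threshold=1.0):
    h = [0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            if math.hypot(gx, gy) >= threshold:   # edge pixel
                ang = math.degrees(math.atan2(gy, gx)) % 180.0
                h[min(int(ang / (180.0 / bins)), bins - 1)] += 1
    return h

# A vertical step edge: all gradients are horizontal (orientation ~0),
# so every interior edge pixel lands in the first bin.
img = [[0, 0, 9, 9]] * 4
print(edge_orientation_histogram(img))   # prints [4, 0, 0, 0]
```

A boosted classifier would then select the most discriminative of many such sub-window histogram features, which is the role Real AdaBoost plays in the study.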
tradeoff made early in the program was to incorporate a full frame bit map to serve as a scan converter between standard interlaced television and...absorb visible light photons, and in addition, we have been able to create a rounded-surface dendrite-type grain structure that greatly enhances the
Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua
An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line-drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3-D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
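The figures quoted above fit together, which a back-of-envelope check makes explicit. The 3 bits/voxel figure (8 colors) is an assumption used only to estimate the raw data rate; the rest is taken from the abstract.

```python
# Sanity-check of the volumetric display numbers: ~200 slices of
# 768 x 768 pixels refreshed at 20 Hz; 3 bits/voxel (8 colors) is an
# assumed encoding for the data-rate estimate.

slices, res, refresh_hz, bits_per_voxel = 200, 768, 768 and 20, 3
slices, res, refresh_hz, bits_per_voxel = 200, 768, 20, 3

voxels = slices * res * res
slice_rate_hz = slices * refresh_hz              # projector slice rate
data_rate_gbit = voxels * bits_per_voxel * refresh_hz / 1e9

print(voxels)                    # ~118 million voxels, > 90 million as claimed
print(slice_rate_hz)             # 4000 slices/s, matching the ~4 kHz projector
print(round(data_rate_gbit, 2))  # raw Gbit/s under the 3-bit assumption
```

The estimated raw rate of roughly 7 Gbit/s also makes the 6 Gbit on-board graphics memory plausible as a working store for a few rasterized volumes.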
Griffith, Lawrence L.; Reidinger, Michael J.; Feigles, Edward M.
TRU-LYTE Systems, Inc. is developing an HDTV display that will exceed displays in the large screen display (LSD) market. Due to the present design and manufacturing techniques of LCDs, ELs, and CRTs, there are limitations for LSD applications. One of the possible solutions is a hybrid of fiber optic technology and transmissive active matrix LCDs. In this design, multiple LCD modules are coupled with an equal number of fiber optic modules. These modules are designed so that strands of fiber optics are placed in a coherent manner from a rear panel to a predetermined spaced front panel. An image projected onto the rear panel results in an enlarged image being displayed on the front panel. Imageboard modules would then be manufactured using this building-block design method. The determining factors would include the desired output intensity, size restrictions, and cost factors. Research has also developed a technology that allows for consistent wide-angle viewing of the image displayed by the optical fibers. Applications for this product range from HDTV to stadium scoreboards.
Song, Joongseok; Kim, Changseob; Park, Hanhoon; Park, Jong-Il
We propose a practical system that can effectively mix the depth data of real and virtual objects by using a Z buffer and can quickly generate digital mixed reality video holograms by using multiple graphic processing units (GPUs). In an experiment, we verify that real objects and virtual objects can be merged naturally in free viewing angles, and the occlusion problem is well handled. Furthermore, we demonstrate that the proposed system can generate mixed reality video holograms at 7.6 frames per second. Finally, the system performance is objectively verified by users' subjective evaluations.
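The Z-buffer mixing named above can be sketched per pixel: whichever layer (real capture or virtual render) is nearer to the camera wins, which is what resolves the occlusion problem. This is a hedged toy model with nested lists; the actual system performs the comparison per fragment on GPUs.

```python
# Hedged Z-buffer mixing sketch: per pixel, take the colour of the
# layer with the smaller depth value (nearer to the camera).

def z_mix(real_rgb, real_z, virt_rgb, virt_z):
    h, w = len(real_z), len(real_z[0])
    return [[real_rgb[y][x] if real_z[y][x] <= virt_z[y][x] else virt_rgb[y][x]
             for x in range(w)] for y in range(h)]

real_rgb = [["R", "R"], ["R", "R"]]
virt_rgb = [["V", "V"], ["V", "V"]]
real_z   = [[1.0, 5.0], [1.0, 5.0]]   # left half of the real scene is near
virt_z   = [[3.0, 3.0], [3.0, 3.0]]
print(z_mix(real_rgb, real_z, virt_rgb, virt_z))
# prints [['R', 'V'], ['R', 'V']]: the virtual object correctly occludes
# the far half of the real scene and is occluded by the near half.
```

The hologram generation stage would then take the mixed colour-plus-depth image as its input.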
The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The content of this paper focuses on the studies and how they are applicable to the safety of operating reactors.
Qian, Huifa; Zhang, Quanzhu; Deng, Yonghong
With the continuous development of sensor manufacturing technology, how best to process the sensor signal is particularly important. The voltage signal generated by a pH sensor is acquired by the MCU and passed through A/D conversion, and the pH value is ultimately shown on a digital display. The system uses hardware and software together to match the results obtained with a high-precision pH meter, striving to improve accuracy and reduce error.
An on-line terminal oriented data storage and retrieval system is presented which allows a user to extract and process information from stored data bases. The use of on-line terminals for extracting and displaying data from the data bases provides a fast and responsive method for obtaining needed information. The system consists of general purpose computer programs that provide the overall capabilities of the total system. The system can process any number of data files via a Dictionary (one for each file) which describes the data format to the system. New files may be added to the system at any time, and reprogramming is not required. Illustrations of the system are shown, and sample inquiries and responses are given.
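The Dictionary mechanism described above — one layout description per file, with no reprogramming needed for new files — can be sketched generically. The field names and widths below are invented for illustration; the point is that a single routine serves every file once its dictionary exists.

```python
# Hedged sketch of dictionary-driven retrieval: each file's Dictionary
# describes its fixed-width record layout, and one generic routine
# extracts fields from any record. Layouts here are illustrative.

def parse_record(record, dictionary):
    """dictionary: list of (field_name, start, length) tuples."""
    return {name: record[start:start + length].strip()
            for name, start, length in dictionary}

# A hypothetical personnel file's dictionary and one of its records.
personnel_dict = [("name", 0, 10), ("dept", 10, 4), ("phone", 14, 8)]
record = "DOE J     ENG 555-0100"
print(parse_record(record, personnel_dict))
# prints {'name': 'DOE J', 'dept': 'ENG', 'phone': '555-0100'}
```

Adding a new file to such a system means writing a new dictionary, exactly as the abstract states, while the parsing and display code is shared.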
Chen, Chien-Hsu; Chou, Yin-Ju
This study focuses on the development of an augmented video system for traditional picture postcards. The system lets users print an augmented reality marker on a sticker to attach to the picture postcard, and it allows users to record real-time images and video to be augmented onto that marker. Through these dynamic images, users can share travel moods, greetings, and travel experiences with their friends. Without changing the traditional picture postcard, we develop an augmented video system on it using augmented reality (AR) technology. It not only keeps the functions of the traditional picture postcard, but also enhances the user's experience, preserving the user's memories and emotional expression through the digital media information augmented onto it.
Future wireless video transmission systems will consider orthogonal frequency division multiplexing (OFDM) as the basic modulation technique due to its robustness and low-complexity implementation in the presence of frequency-selective channels. Recently, adaptive bit loading techniques have been applied to OFDM, showing good performance gains in cable transmission systems. In this paper a multilayer bit loading technique, based on the so-called "ordered subcarrier selection algorithm," is proposed and applied to a Hiperlan2-like wireless system at 5 GHz for efficient layered multimedia transmission. Different schemes realizing unequal error protection at both coding and modulation levels are compared. The strong impact of this technique in terms of video quality is evaluated for MPEG-4 video transmission.
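The ordered-subcarrier idea behind such multilayer bit loading can be sketched simply. This is a hedged illustration, not the paper's algorithm: subcarriers are ranked by channel quality and the most important video layer (the base layer) is mapped onto the best ones, giving unequal error protection at the modulation level; the SNR values and layer split are invented.

```python
# Hedged sketch of ordered subcarrier selection for layered video:
# rank subcarriers by SNR, map the base layer onto the best channels
# and enhancement data onto the rest. Values are illustrative.

def order_subcarriers(snr_db):
    """Indices of subcarriers, best channel first."""
    return sorted(range(len(snr_db)), key=lambda i: snr_db[i], reverse=True)

def assign_layers(snr_db, n_base):
    order = order_subcarriers(snr_db)
    return {"base": sorted(order[:n_base]),
            "enhancement": sorted(order[n_base:])}

snr = [12.0, 3.5, 18.2, 7.1, 15.0, 4.9]   # per-subcarrier SNR (dB)
print(assign_layers(snr, n_base=3))
# prints {'base': [0, 2, 4], 'enhancement': [1, 3, 5]}
```

A full bit-loading scheme would additionally choose the constellation size per subcarrier from its SNR, but the ordering step above is what protects the base layer.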
Rothkrantz, L.; Lefter, I.
The paper describes a surveillance system of cameras installed on lampposts in a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks and is surrounded by gates and water. The video recordings are
Rasoulzadeh, Yahya; Gholamnia, Reza
According to the findings of several studies conducted on work-related musculoskeletal disorders (WMSDs) among video display terminal (VDT) users, prevention of these disorders among this population is a challenge for many workplaces today. Ergonomically improving VDT workstations may be an effective and applicable way to decrease the risk of WMSDs. This study evaluated the effect of an ergonomics-training program on the risk of WMSDs among VDT users. The study was conducted among a large group of computer users in the SAPCO industrial company, Tehran, Iran (84 persons, aged 29.85±11.2 years, with 6.98±2.54 years of experience). An active ergonomics-training program was designed and implemented over 14 days to empower the VDT users and involve them in improving their workstations. The direct observational RULA (Rapid Upper Limb Assessment) method was used in the pre- and post-intervention stages to evaluate the risk of WMSDs among participants. The RULA final scores showed that 18.8% of VDT users were at action level 2, 63.5% at action level 3, and 17.6% at action level 4 before any intervention. In addition, 8.2% of users were at action level 1, 44.7% at action level 2, 42.4% at action level 3, and 4.7% at action level 4 at the post-intervention stage. The results of the Wilcoxon statistical test indicated that RULA scores decreased significantly after the interventions (P < 0.05) and, consequently, so did the risk of WMSDs. Active ergonomics training programs can be used effectively to improve VDT workstations and decrease the risk of musculoskeletal disorders among VDT users.
Desjardins, Daniel D.; Meyer, Frederick
The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.
This paper proposes a camera road sign system for early warning, which can help avoid vehicle collisions with wild animals. The system consists of camera modules placed along a particularly chosen route and intelligent road signs. The camera module consists of a camera device and a computing unit. The video stream is captured from the video camera by the computing unit. Then object detection algorithms are deployed. Afterwards, machine learning algorithms are used to classify the moving objects. If a moving object is classified as an animal and this animal can be dangerous to the safety of the vehicle, a warning is displayed on the intelligent road signs.
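The detect-classify-warn pipeline can be sketched schematically. Everything below is a hedged stand-in: the dangerous-animal class list and the stub classifier are assumptions, and a real module would run trained detection and classification models on the camera stream.

```python
# Schematic of the warning pipeline: detect moving objects, classify
# them, and raise a warning only for classes dangerous to vehicles.
# Class list and classifier are illustrative stand-ins.

DANGEROUS = {"deer", "moose", "boar"}

def classify(detection):
    # Stand-in for the machine-learning classifier.
    return detection["label"]

def process_frame(detections):
    """Return True if the road sign should display a warning."""
    return any(classify(d) in DANGEROUS for d in detections)

frame = [{"label": "car"}, {"label": "deer"}]
print(process_frame(frame))   # prints True -> warning shown on the sign
```

Keeping the decision logic this simple on the sign side matters because the computing unit at the lamppost, not the sign, carries the detection and classification load.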
Millis, Marc G.
The features, design, calibration, and testing of Lewis Research Center's acceleration display system for aircraft zero-gravity research are described. Specific circuit schematics and system specifications are included, as well as representative data traces from flown trajectories. Other observations learned from developing and using this system are mentioned where appropriate. The system, now a permanent part of the Lewis Learjet zero-gravity program, provides legible, concise, and necessary guidance information enabling pilots to routinely fly accurate zero-gravity trajectories. Regular use of this system resulted in improvements to the Learjet zero-gravity flight techniques, including a technique to minimize lateral accelerations. Lewis Gates Learjet trajectory data show that accelerations can be reliably sustained within 0.01 g for 5 consecutive seconds, within 0.02 g for 7 consecutive seconds, and within 0.04 g for up to 20 seconds. Lewis followed past practices of acceleration measurement, yet focused on the acceleration displays. Refinements based on flight experience included evolving the ranges, resolutions, and frequency responses to fit the pilot and the Learjet responses.
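The sustained-acceleration figures quoted above imply a simple analysis of the flight traces: finding the longest stretch that stays within a tolerance of zero g. A hedged sketch of that check, with an invented sample trace and sample interval:

```python
# Hedged sketch: longest duration a sampled acceleration trace stays
# within +/- tol of zero g. Trace and sample interval are invented.

def longest_within(trace, tol, dt):
    """Longest duration (seconds) the trace stays within +/- tol."""
    best = run = 0
    for a in trace:
        run = run + 1 if abs(a) <= tol else 0
        best = max(best, run)
    return best * dt

trace = [0.005, -0.008, 0.009, 0.015, 0.007, 0.006, 0.003, 0.004]
print(longest_within(trace, tol=0.01, dt=1.0))   # prints 4.0 (seconds)
```

Run against real trajectory data at the actual sample rate, this kind of check is how statements like "within 0.01 g for 5 consecutive seconds" would be verified.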
Ecliptic Enterprises Corporation, headquartered in Pasadena, California, provided onboard video systems for rocket and space shuttle launches before it was tasked by Ames Research Center to craft the Data Handling Unit that would control sensor instruments onboard the Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft. The technological capabilities the company acquired on this project, as well as those gained developing a high-speed video system for monitoring the parachute deployments for the Orion Pad Abort Test Program at Dryden Flight Research Center, have enabled the company to offer high-speed and high-definition video for geosynchronous satellites and commercial space missions, providing remarkable footage that both informs engineers and inspires the imagination of the general public.
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of visually lossless coding, and reports the new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images, and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images (i.e., panning), and image sequences. These requirements are the basis for new amendments to the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
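The pass/fail logic of such a forced-choice experiment can be sketched briefly. This is a minimal illustration, assuming the common psychophysics convention that 75% correct in a two-alternative forced-choice test corresponds to one just noticeable difference; the exact criterion in ISO/IEC 29170-2 may differ.

```python
# Scoring a two-alternative forced-choice (2AFC) trial set, in the spirit of
# ISO/IEC 29170-2-style subjective tests. The 75% threshold standing in for
# "one JND" is an assumed convention, not a quote from the standard.

def jnd_detectable(responses, threshold=0.75):
    """responses: list of booleans, True = observer correctly picked the coded image.

    Returns (proportion_correct, detectable). If observers identify the coded
    image no more often than the threshold, the artifact is treated as below
    one JND, i.e. visually lossless for this content.
    """
    if not responses:
        raise ValueError("no trials")
    p = sum(responses) / len(responses)
    return p, p > threshold

# 30 trials, 16 correct picks: close to chance (50%), so below one JND.
p, detectable = jnd_detectable([True] * 16 + [False] * 14)
```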
Ning, Li; Ruilan, Zhang; Jian, Liu; Xiaochen, Wang; Shuying, Chen; Zhuolin, Lang
This article introduces an ECG data-acquisition system based on an AVR microcontroller. The system uses the on-chip A/D converter of the ATmega8 and a dot-matrix graphic LCD to meet the acquisition requirements. The design details the hardware composition and the software programming of the system, which realizes real-time acquisition, amplification, filtering, A/D conversion, and LCD display. Since the AVR integrates the A/D conversion function and supports embedded C programming, it reduces the peripheral circuitry and furthermore shortens the time needed to design and debug the system.
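The signal chain the abstract describes can be sketched on the host side: a raw 10-bit ADC code is scaled back to the input-referred voltage, then smoothed. The 5 V reference and ×1000 amplifier gain below are illustrative assumptions, not values from the article.

```python
# Host-side sketch of the ECG chain: a 10-bit ADC sample (as on the ATmega8)
# scaled to input-referred millivolts, then smoothed with a short
# moving-average filter standing in for the article's filter stage.

VREF_MV = 5000.0   # assumed 5 V ADC reference, in millivolts
GAIN = 1000.0      # assumed front-end amplifier gain

def adc_to_mv(sample, bits=10):
    """Convert a raw ADC code to the input-referred signal in millivolts."""
    return sample * VREF_MV / ((1 << bits) - 1) / GAIN

def moving_average(samples, window=4):
    """Simple FIR low-pass; averages up to `window` most recent samples."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i - lo + 1))
    return out
```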
Jones, D. P.; Shirey, D. L.; Amai, W. A.
This paper presents a high-bandwidth fiber-optic communication system intended for post-accident recovery of weapons. The system provides bi-directional, multichannel, multimedia communications. Two smaller systems developed as direct spin-offs of the larger system are also briefly discussed.
... COMMISSION In the Matter of Certain Video Analytics Software, Systems, Components Thereof, and Products... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... analytics software, systems, components thereof, and products containing same by reason of infringement of... after importation of certain video analytics software, systems, components thereof, and products...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... Trade Commission has received a complaint entitled Certain Video Analytics Software, Systems, Components... analytics software, systems, components thereof, and products containing same. The complaint names as...
He, Ding; Xu, Tong; Wang, Yongtian; Hua, Hong; Hu, Ying
A head-mounted display system for virtual reality is developed, mainly comprising a pair of viewing lenses together with LCDs to provide the stereoscopic image, and a tracking device to detect the motion of the head. Each viewing lens contains 4 optical elements and gives a 120° field of view for each eye when used with a 2.2-inch LCD. The tracking device consists of a 3-axis fluxgate magnetometer and a pendulum, which determine the orientation angles of the helmet. Another version of the tracking device, capable of measuring 6-degree-of-freedom movement of the helmet, is currently under development.
Stolte, Chris; Hanrahan, Patrick
Systems and methods for displaying data in split dimension levels are disclosed. In some implementations, a method includes: at a computer, obtaining a dimensional hierarchy associated with a dataset, wherein the dimensional hierarchy includes at least one dimension and a sub-dimension of the at least one dimension; and populating information representing data included in the dataset into a visual table having a first axis and a second axis, wherein the first axis corresponds to the at least one dimension and the second axis corresponds to the sub-dimension of the at least one dimension.
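The claimed layout, one axis for a dimension and a second axis for its sub-dimension, can be illustrated with a small aggregation. The dataset and field names below are invented for the example, not taken from the patent.

```python
# Sketch of populating a visual table whose first axis follows a dimension
# (e.g. year) and whose second axis follows its sub-dimension (e.g. quarter),
# aggregating a measure into each cell.

def split_dimension_table(rows, dim, sub_dim, measure):
    """Return {dim_value: {sub_dim_value: aggregated measure}}."""
    table = {}
    for r in rows:
        cell = table.setdefault(r[dim], {})
        cell[r[sub_dim]] = cell.get(r[sub_dim], 0) + r[measure]
    return table

sales = [
    {"year": 2023, "quarter": "Q1", "amount": 10},
    {"year": 2023, "quarter": "Q2", "amount": 15},
    {"year": 2024, "quarter": "Q1", "amount": 12},
]
t = split_dimension_table(sales, "year", "quarter", "amount")
```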
Gramss, Denise; Struve, Doreen
The study reported in this paper investigated the usefulness of different instructions for guiding inexperienced older adults through interactive systems. It was designed to compare different media in relation to their social and motivational impact on the elderly during the learning process. Specifically, video was compared with…
Glazkov, V. D.; Goretov, Iu. M.; Rozhavskii, E. I.; Shcherbakov, V. V.
The self-correcting video section of the satellite-borne Fragment multispectral scanning system is described. The design of this section makes possible efficient equalization of the transformation coefficients of all the measuring sections, given a reference radiation source and a single reference time interval common to all sections.
In this work we introduce a simple client-server system architecture and algorithms for ubiquitous live video and VOD service support. The main features of the system are efficient usage of network resources, emphasis on user personalization, and ease of implementation. The system supports many continuous-service requirements, such as QoS provision, user mobility between networks and between different communication devices, and simultaneous usage of a device by a number of users.
An object-based video authentication system, which combines watermarking, error correction coding (ECC), and digital signature techniques, is presented for protecting the authenticity between video objects and their associated backgrounds. In this system, a set of angular radial transformation (ART) coefficients is selected as the feature to represent the video object and the background, respectively. ECC and cryptographic hashing are applied to those selected coefficients to generate the robust authentication watermark. This content-based, semifragile watermark is then embedded into the objects frame by frame before MPEG-4 coding. In watermark embedding and extraction, groups of discrete Fourier transform (DFT) coefficients are randomly selected, and their energy relationships are employed to hide and extract the watermark. The experimental results demonstrate that our system is robust to MPEG-4 compression, object segmentation errors, and some common object-based video processing such as object translation, rotation, and scaling, while securely preventing malicious object modifications. The proposed solution can be further incorporated into a public key infrastructure (PKI).
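The energy-relationship mechanism mentioned above can be shown on a toy scale. This sketch only demonstrates the ordering trick on a pair of coefficient magnitudes; the actual DFT grouping, random selection, and MPEG-4 integration in the paper are omitted.

```python
# Toy illustration of hiding one bit in the energy relationship between two
# transform coefficients: the bit is encoded as which of the two magnitudes
# is larger, and recovered by comparing them.

def embed_bit(c1, c2, bit):
    """Force |c1| >= |c2| for bit 1 and |c1| < |c2| for bit 0, by swapping."""
    if (abs(c1) >= abs(c2)) != bool(bit):
        c1, c2 = c2, c1
    return c1, c2

def extract_bit(c1, c2):
    """Recover the hidden bit from the magnitude ordering."""
    return 1 if abs(c1) >= abs(c2) else 0
```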
Su, Runyu; Nie, Boyao; Yuan, Shengling; Tao, Haoxia; Liu, Chunjie; Yang, Bailiang; Wang, Yanchun
To describe a novel particle surface display system consisting of gram-positive enhancer matrix (GEM) particles and anchor proteins for bacteria-like particle vaccines, we treated Lactobacillus rhamnosus GG bacteria with 10% heated TCA to prepare GEM particles, and then identified the harvested GEM particles by electron microscopy, RT-PCR, and SDS-PAGE. Meanwhile, Escherichia coli was induced to express the hybrid proteins PA3-EGFP and P60-EGFP, and GEM particles were incubated with them. Binding of the anchor proteins was then determined by Western blotting, transmission electron microscopy, fluorescence microscopy, and spectrofluorometry. GEM particles preserved their original size and shape, while their protein and DNA contents were substantially released. Both anchor proteins were efficiently immobilized on the surface of the GEM particles, and particles bound by anchor proteins appeared brushy. The fluorescence of GEM particles anchoring PA3 was slightly brighter than that of P60, but the difference was not significant (P>0.05). GEM particles prepared from L. rhamnosus GG have good binding efficiency with the anchor proteins PA3-EGFP and P60-EGFP. Therefore, this novel foreign-protein surface display system could be used for bacteria-like particle vaccines.
Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian
Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with the regularized BP neural network alone, and that its generalization ability is superior to the LM-BP and Bayesian BP neural networks.
Kong, Hyoun-Joong; Seo, Jong Mo; Hwang, Jeong Min; Kim, Hee Chan
A binocular indirect ophthalmoscope (BIO) provides a wider view of the fundus with stereopsis, in contrast to the direct ophthalmoscope. The proposed system is composed of a portable BIO and a 3D viewing unit. The illumination unit of the BIO uses a high-flux LED as a light source, an LED condensing lens cap for beam focusing, color filters, and a small lithium-ion battery. In the optics unit of the BIO, a beam splitter distributes the examinee's fundus image both to the examiner's eye and to a CMOS camera module attached to the device. Captured retinal video stream data from the stereo camera modules are sent to a PC through USB 2.0. For 3D viewing, the two video streams, having parallax between them, are aligned vertically and horizontally and combined into a side-by-side video stream for cross-eyed stereoscopy. The data are then converted into an autostereoscopic video stream using vertical interlacing for a stereoscopic LCD with a glass 3D filter attached to its front side. Our newly devised system presented a real-time 3D view of the fundus to assistants with less dizziness than cross-eyed stereoscopy, and the BIO showed good performance compared to a conventional portable BIO (Spectra Plus, Keeler Limited, Windsor, UK).
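The two frame rearrangements described, side-by-side packing and row interlacing for the autostereoscopic LCD, can be sketched on plain row lists. Frame contents here are placeholder strings, not real image data.

```python
# Sketch of the two stereo layouts in the abstract: side-by-side packing of a
# left/right frame pair, and vertical (row) interlacing for a row-interleaved
# autostereoscopic display.

def side_by_side(left, right):
    """Concatenate each row of the left and right frames horizontally."""
    return [l + r for l, r in zip(left, right)]

def interlace_rows(left, right):
    """Alternate rows: even rows from the left eye, odd rows from the right."""
    return [l if y % 2 == 0 else r
            for y, (l, r) in enumerate(zip(left, right))]

L = [["L0"], ["L1"]]
R = [["R0"], ["R1"]]
```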
Chow, John W.; Carlton, Les G.; Ekkekakis, Panteleimon; Hay, James G.
Discusses advantages of a video-based, digitized image system for the study and analysis of projectile motion in the physics laboratory. Describes the implementation of a web-based digitized video system. (WRM)
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... the United States after importation of certain video analytics software systems, components thereof...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Investigations: Terminations, Modifications and Rulings: Certain Video Game Systems and... United States after importation of certain video game systems and controllers by reason of infringement...
Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem
The paper presents the IVAS system within the FP7 EU INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in a dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field; they can command field officers via text messages, voice, or video calls, and they can manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can view pictures or videos sent by the commander in the office and can respond to commands via text or multimedia messages taken by their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
An HTTP-based video transmission system has been built upon a P2P (peer-to-peer) network structure using Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. To achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer responds to video stream requests over the HTTP protocol. An HTTP-based pipe communication model was developed to speed up the transmission of video stream data, which is encoded into fragments using the JPEG codec. To make the system capable of conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.
Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang
Wireless power transmission (WPT) technology can solve the energy shortage problem of the video capsule endoscope (VCE) powered by button batteries, but a fixed platform limits its clinical application. This paper presents a portable WPT system for VCE. Besides portability, power transfer efficiency and stability are the main indexes in the optimization design of the system, which covers the transmitting coil structure, portable control box, operating frequency, and the magnetic core and winding of the receiving coil. Following these principles, the relevant parameters are measured, compared, and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.
Berka, Martin; Żyliński, Marek; Niewiadomski, Wiktor; Cybulski, Gerard
This paper presents the design of a compact, wearable, rechargeable acceleration recorder to support long-term monitoring of ambulatory patients with motor disorders, and of software to display and analyze its output. The device consists of a microcontroller, operational amplifier, accelerometer, SD card, indicator LED, rechargeable battery, and associated minor components. It can operate for over a day without charging and can continuously collect data for three weeks without downloading to an outside system, as currently configured. With slight modifications, this period could be extended to several months. The accompanying software provides flexible visualization of the acceleration data over long periods, basic file operations and compression for easier archiving, annotation of segments of interest, and functions for calculation of various parameters and detection of immobility and vibration frequencies. Applications in analysis of gait and other movements are discussed.
... COMMISSION In the Matter of: Certain Video Game Systems and Controllers; Notice of Investigation AGENCY: U.S... importation, and the sale within the United States after importation of certain video game systems and... after importation of certain video game systems and controllers that infringe one or more of claims 16...
Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément
This article presents a system and a protocol to characterize image stabilization systems both for still images and videos. It uses a six-axis platform, three axes being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have typically been recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still images and videos: the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the accuracy of the human visual system) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance, as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations about the optical axis and distortion due to the electronic rolling shutter that equips most CMOS sensors. The protocol is applied to all types of cameras, such as DSCs, DSLRs, and smartphones.
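Fitting a homography to four marker correspondences, the video measurement step above, can be sketched with a direct linear transform. This is a minimal pure-Python version assuming exactly four correspondences and no noise; the article's sub-pixel marker detection is not reproduced.

```python
# Direct linear transform (DLT): recover the 3x3 homography H (with h22 = 1)
# that maps four reference marker positions to their detected positions in
# the current frame, then warp arbitrary points with it.

def solve(A, b):
    """Gaussian elimination with partial pivoting for an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """Fit H from exactly four point correspondences (8 unknowns)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_h(H, pt):
    """Apply H to a point in homogeneous fashion."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Four corners of a unit square shifted by (2, 3): a pure translation.
H = homography([(0, 0), (1, 0), (1, 1), (0, 1)],
               [(2, 3), (3, 3), (3, 4), (2, 4)])
center = apply_h(H, (0.5, 0.5))
```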
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Multimedia Display and Navigation Devices and Systems, Components Thereof, and Products... multimedia display and navigation devices and systems, components thereof, and products containing same by...
Sun, Jun; Liang, Mingxing; Chen, Weijun; Zhang, Bin
In order to reinforce the measure of vegetable shed's safety, the S3C44B0X is taken as the main processor chip. The embedded hardware platform is built with a few outer-ring chips, and the network server is structured under the Linux embedded environment, and MPEG4 compression and real time transmission are carried on. The experiment indicates that the video monitoring system can guarantee good effect, which can be applied to the safety of vegetable sheds.
Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício
Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce ``content pollution'' into the system, thus causing loss of service effectiveness and credibility as w...
Fu, Chang-Hong; Chan, Yui-Lam; Ip, Tak-Piu; Siu, Wan-Chi
MPEG digital video is becoming ubiquitous for video storage and communications. It is often desirable to perform various video cassette recording (VCR) functions such as backward playback in MPEG videos. However, the predictive processing techniques employed in MPEG severely complicate the backward-play operation. A straightforward implementation of backward playback is to transmit and decode the whole group-of-pictures (GOP), store all the decoded frames in the decoder buffer, and play the decoded frames in reverse order. This approach requires a significant buffer in the decoder, which depends on the GOP size, to store the decoded frames, and may not be feasible under severely constrained memory requirements. Another alternative is to decode the GOP up to the current frame to be displayed, and then go back to decode the GOP again up to the next frame to be displayed. This approach does not need the large buffer, but requires much higher network bandwidth and decoder complexity. In this paper, we propose a macroblock-based algorithm for an efficient implementation of an MPEG video streaming system that provides backward playback over a network with minimal requirements on the network bandwidth and the decoder complexity. The proposed algorithm classifies macroblocks in the requested frame into backward macroblocks (BMBs) and forward/backward macroblocks (FBMBs). Two macroblock-based techniques are used to manipulate the different types of macroblocks in the compressed domain, and the server then sends the processed macroblocks to the client machine. For BMBs, a VLC-domain technique is adopted to reduce the number of macroblocks that need to be decoded by the decoder and the number of bits that need to be sent over the network in the backward-play operation. We then propose a newly mixed VLC/DCT-domain technique to handle FBMBs in order to further reduce the computational complexity of the decoder. With these compressed-domain techniques, the
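The cost asymmetry between the two straightforward approaches above can be made concrete with a back-of-envelope count, assuming forward-only prediction within the GOP (B-frame details are ignored for simplicity).

```python
# Why naive backward playback is expensive: with forward prediction only,
# displaying frame k of a GOP requires decoding frames 1..k, so playing an
# N-frame GOP fully backwards by re-decoding costs N + (N-1) + ... + 1
# decodes, versus N decodes when the whole decoded GOP is buffered first.

def redecode_cost(gop_size):
    """Total frame decodes to play one GOP backwards without buffering."""
    return gop_size * (gop_size + 1) // 2

def buffered_cost(gop_size):
    """Total frame decodes when the whole decoded GOP is buffered first."""
    return gop_size
```

For a typical 15-frame GOP the re-decode approach performs 120 decodes instead of 15, which is the bandwidth/complexity penalty the proposed macroblock-based techniques aim to avoid.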
Hanjalic, Alan; Ceccarelli, Marco; Lagendijk, Reginald L.; Biemond, Jan
In the European project SMASH, mass-market storage systems for domestic use are under study. Besides the storage technology developed in this project, the related objective of user-friendly browsing/query of video data is studied as well. Key issues in developing a user-friendly system are (1) minimizing user intervention in preparatory steps (extraction and storage of representative information needed for browsing/query), (2) providing an acceptable representation of the stored video content in view of a higher automation level, (3) the possibility of performing these steps directly on the incoming stream at storage time, and (4) parameter-robustness of the algorithms used for these steps. This paper proposes and validates novel approaches for the automation of the mentioned preparatory phases. A detection method for abrupt shot changes is proposed, using a locally computed threshold based on a statistical model for frame-to-frame differences. For the extraction of representative frames (key frames), an approach is presented which distributes a given number of key frames over the sequence depending on content changes in a temporal segment of the sequence. A multimedia database is introduced, able to automatically store all bibliographic information about a recorded video as well as a visual representation of the content without any manual intervention from the user.
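The locally computed threshold for cut detection can be sketched as follows. The window size and the k-sigma rule here are illustrative choices standing in for the paper's statistical model, not its actual parameters.

```python
# Cut detection with a locally computed threshold: a frame-to-frame
# difference is declared an abrupt shot change when it exceeds the mean of
# its local neighbourhood by k standard deviations.

import statistics

def detect_cuts(diffs, window=5, k=3.0):
    """Return indices of frame differences flagged as abrupt shot changes."""
    cuts = []
    for i, d in enumerate(diffs):
        lo = max(0, i - window)
        local = diffs[lo:i] + diffs[i + 1:i + 1 + window]  # exclude diffs[i]
        if len(local) < 2:
            continue
        mu = statistics.mean(local)
        sigma = statistics.pstdev(local)
        if sigma > 0 and d > mu + k * sigma:
            cuts.append(i)
    return cuts

# Flat differences with one spike at index 6 -> one detected cut.
frame_diffs = [1.0, 1.1, 0.9, 1.0, 1.2, 1.0, 9.0, 1.1, 1.0, 0.9, 1.0, 1.1]
cuts = detect_cuts(frame_diffs)
```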
Edwards, Tim; Barnhart, Warren; Sawyer, Kevin; Aiken, Edwin W. (Technical Monitor)
A high performance color helmet mounted display (HMD) system for in-flight simulation and research has been developed for the Rotorcraft Aircrew Systems Concepts Laboratory (RASCAL). The display system consists of a programmable display generator, a display electronics unit, a head tracker, and the helmet with display optics. The system provides a maximum of 1024 x 1280 resolution, a 4:1 contrast ratio, and a brightness of 1100 fL utilizing currently available technologies. This paper describes the major features and components of the system. Also discussed are the measured performance of the system and the design techniques that allowed the development of a full color HMD.
3D experience and free-viewpoint navigation are expected to be two essential features of next generation television. In this paper, we present a flexible 3DTV system in which multiview video streams are captured, compressed, transmitted, and finally converted to high-quality 3D video in real time. Our system consists of an 8×8 camera array, 16 producer PCs, a streaming server, multiple clients, and several autostereoscopic displays. The whole system is implemented over an IP network to provide multiple users with interactive 2D/3D switching, viewpoint control, and synthesis for dynamic scenes. In our approach, multiple video streams are first captured by a synchronized camera array. Then, we adopt a lengthened-B-field and region-of-interest (ROI) based coding scheme to guarantee seamless view switching for each user as well as saving per-user transmission bandwidth. Finally, a convenient rendering algorithm is used to synthesize a visually pleasing result by introducing a new metric called Clarity Degree (CD). Experiments on both synthetic and real-world data have verified the feasibility, flexibility, and good performance of our system.
Qin, Damin; Takamatsu, Mamoru; Nakashima, Yoshio
In three-dimensional display systems, binocular disparities must be limited within a certain fusional area, known as Panum's fusional area. Otherwise, too large a disparity can cause double vision or serious eye fatigue. However, the measurements of Panum's fusional area in previous studies focused only on the horizontal and vertical meridians of the retina. To measure Panum's fusional area more fully, we measured its limits in the fovea in sixteen different directions, from 0 to 360 degrees in steps of 22.5 degrees. It was found that the horizontal disparity limit of the binocular fusional area is about 32-38.4 min of arc and the vertical limit is about 19.2-24 min of arc. The disparity limits of the binocular fusional area are approximately symmetrical about the horizontal meridian. However, the disparity limits are not symmetrical about the vertical meridian: the nasalward disparity limits are clearly larger than the temporalward disparity limits. Moreover, in the nasal side of the retina the disparity limits decrease in a monotonic fashion, whereas in the temporal side the disparity limits show no obvious difference.
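For display design, the reported arc-minute limits translate into on-screen disparity offsets that depend on viewing distance. The 0.6 m viewing distance below is an assumed example, not a value from the study.

```python
# Worked conversion: a disparity of `arcmin` minutes of arc subtends an
# on-screen offset of tan(angle) * viewing_distance on the display plane.

import math

def arcmin_to_offset_mm(arcmin, viewing_distance_m):
    """On-screen disparity (mm) subtending `arcmin` at the given distance."""
    return math.tan(math.radians(arcmin / 60.0)) * viewing_distance_m * 1000.0

# The ~32 arcmin horizontal fusional limit at an assumed 0.6 m viewing
# distance corresponds to roughly a 5.6 mm on-screen offset.
offset = arcmin_to_offset_mm(32, 0.6)
```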
V. M. Sorokin
The creation of new liquid crystal displays and their adaptation to different external environments are impossible without correct diagnosis of the wide range of electro-optical effects inherent in nematic, smectic, and cholesteric liquid crystal materials. Modern universal display measuring complexes make it possible to solve this problem. Among the display measuring complexes widely used in scientific centers and enterprises in Russia, Belarus, and Ukraine, the CM-100 complex, developed at the Institute of Semiconductor Physics of the NAS of Ukraine, is the most suitable.
Li, Yucheng; Han, Dantao; Yan, Juanli
A wireless video surveillance system based on ARM was designed and implemented in this article. The newest ARM11 S3C6410 was used as the main monitoring terminal chip, running the embedded Linux operating system. The video input is obtained from an analog CCD and converted from analog to digital by the TVP5150 video chip. After being compressed by the H.264 encoder in the S3C6410, the video is packed with RTP and transmitted via the TL-WN322G+ wireless USB adapter. Furthermore, the video images are preprocessed: the system can detect abnormalities in the specified scene and raise alarms. The video transmission definition is standard-definition 480p, and the video stream can be monitored in real time. The system has been used for real-time intelligent video surveillance of the specified scene.
Arai, Jun; Okui, Makoto; Yamashita, Takayuki; Okano, Fumio
We have developed an integral three-dimensional (3-D) television that uses a 2000-scanning-line video system that can shoot and display 3-D color moving images in real time. We had previously developed an integral 3-D television that used a high-definition television system. The new system uses approximately 6 times as many elemental images [160 (horizontal) × 118 (vertical) elemental images], arranged at approximately 1.5 times the density, to further improve the picture quality of the reconstructed image. By comparison, an image near the lens array can be reconstructed at approximately 1.9 times the spatial frequency, and the viewing angle is approximately 1.5 times as wide.
Scheifinger, Helfried; Koch, Elisabeth
Some national weather services operate a special web interface where citizens can enter their phenological observations. Such a system opens the opportunity to immediately display the current state of seasonal vegetation development. Here a few simple tools are introduced to evaluate and display near-real-time phenological observations with respect to the interannual variability and trends over the last decades. For many phenological phases, continuous time series since 1946 are available in Austria, a period sufficiently long to study the climate impact on phenology. Phenological observations can be entered in near real time via the ZAMG web portal or digitised after the season from the observer sheets with a considerable time lag. About 30% to 50% of the total phenological data stem from the near-real-time system, which can be used for near-real-time monitoring of the phenological season. The minimum number of observations that must be available for inclusion in the procedure has arbitrarily been set to 12, in order to allow a reasonable height regression. The system installed at the ZAMG produces a nightly update of the statistical analysis and figures, which can then, for instance, be summarised for a news release. At the moment no spatial differentiation is possible: all conclusions and figures are based on phenological entry dates over all Austrian observations, standardised to an arbitrary station elevation of 200 m above sea level via height regression. Regarding the 2012 Austrian phenological season in relation to 1946-2011: the cold period from the end of January to the beginning of February 2012 in Austria also left its marks on the phenological season. The early phases, like the beginning of flowering of snowdrop, hazel, or willow, are found in the median position of rank 32 of the 67 years since 1946. The remainder of the 2012 season generally shows rather early entry dates. On average the phenological entry dates range at
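The height-regression standardization step can be sketched as a simple linear shift. The 3-days-per-100-m slope below is an illustrative assumption; the operational value would come from the regression fitted over the actual observations.

```python
# Standardizing a phenological entry date (day of year) from station
# elevation to the common 200 m reference, using a linear lapse expressed
# in days per 100 m of elevation.

def standardize_entry_date(doy, station_elev_m, slope_days_per_100m=3.0,
                           ref_elev_m=200.0):
    """Shift a day-of-year entry date from station elevation to 200 m."""
    return doy - slope_days_per_100m * (station_elev_m - ref_elev_m) / 100.0

# A phase observed on day 110 at 500 m maps to day 101 at the 200 m reference.
d = standardize_entry_date(110, 500)
```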
Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of (0.7 ± 0.3) pixels and mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
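Both reported accuracy figures, re-projection error in pixels and target registration error (TRE) in mm, are mean Euclidean distances between predicted and reference positions. A minimal sketch, with invented points for illustration:

```python
# Mean Euclidean error between paired predicted and reference points,
# applicable to 2D re-projection error (pixels) or 3D TRE (mm) alike.

import math

def mean_error(predicted, reference):
    """Mean Euclidean distance between paired points (any dimension)."""
    dists = [math.dist(p, r) for p, r in zip(predicted, reference)]
    return sum(dists) / len(dists)

# Two hypothetical 3D targets, off by 1 mm and 3 mm along one axis.
pred = [(1.0, 2.0, 0.0), (4.0, 6.0, 3.0)]
ref = [(1.0, 2.0, 1.0), (4.0, 6.0, 0.0)]
tre = mean_error(pred, ref)
```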
Bower, Matt; Cavanagh, Michael; Moloney, Robyn; Dao, MingMing
This paper reports on how the cognitive, behavioural and affective communication competencies of undergraduate students were developed using an online Video Reflection system. Pre-service teachers were provided with communication scenarios and asked to record short videos of one another making presentations. Students then uploaded their videos to…
Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng
As quality of life has improved significantly, traditional 2D video technology can no longer satisfy the desire for better video quality, which has driven the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video playback platform consisting of a server and clients. The server transmits video in different formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We use and extend Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of an ordinary player and can play normal 2D video; it serves as the base structure for redevelopment, with RTSP implemented on top for communication. To achieve stereoscopic display, pixels are rearranged in the player's decoding stage. The decoding part is native code called through the JNI interface, so video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom and nine-grid. The design and development employ a number of key technologies from Android application development, including wireless transmission, pixel restructuring and JNI calls. After updates and optimisation, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.
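The pixel-rearrangement step for the left-right format can be illustrated with a minimal sketch on nested lists; the real player does this in native JNI code on decoded frames, and the column-interleaved output format is an assumption (many autostereoscopic panels expect interleaved views):

```python
def split_side_by_side(frame):
    """Split a side-by-side (left/right) stereo frame into two views.

    `frame` is a list of rows, each row a list of pixels; the left half
    of every row is the left-eye view, the right half the right-eye view.
    """
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right


def interleave_columns(left, right):
    """Column-interleave two views, alternating left/right pixels."""
    out = []
    for lrow, rrow in zip(left, right):
        row = [lrow[i] if i % 2 == 0 else rrow[i] for i in range(len(lrow))]
        out.append(row)
    return out
```

The top-bottom format would be handled analogously by splitting rows instead of columns.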
Fuchs, Henry; Pizer, Stephen M.; Heinz, E. Ralph; Bloomberg, Sandra H.; Tsai, Li-Ching; Strickland, Dorothy C.
We are developing graphics systems, image preprocessing methods, and interactive manipulation techniques for a space-filling 3D display using a varifocal mirror principle. Our driving problem is a medical imaging need for presentation of three-dimensional intensity information. The major goal of both the image preprocessing and the interactive manipulation has been to overcome obscuration, which we feel is coming to be recognized as the central problem in any space-filling display. In our system, the preprocessing step highlights important image features such as surfaces. At display time, the object can be dynamically edited and rotated for convenient viewing from various directions. Our particular hardware design allows the 3D display to be constructed as an inexpensive add-on to a standard video graphics system. The interactive rotation and other manipulations are achieved by the standard built-in graphics processor.
Li, Hejian; An, Ping; Zhang, Zhaoyang
Three-dimensional (3-D) video brings people strong visual perspective experience, but also introduces large data and complexity processing problems. The depth estimation algorithm is especially complex and it is an obstacle for real-time system implementation. Meanwhile, high-resolution depth maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3-D glasses. This paper presents a hardware implementation of a full high-definition (HD) depth estimation system that is capable of processing full HD resolution images with a maximum processing speed of 125 fps and a disparity search range of 240 pixels. The proposed field-programmable gate array (FPGA)-based architecture implements a fusion strategy matching algorithm for efficiency design. The system performs with high efficiency and stability by using a full pipeline design, multiresolution processing, synchronizers which avoid clock domain crossing problems, efficient memory management, etc. The implementation can be included in the video systems for live 3-D television applications and can be used as an independent hardware module in low-power integrated applications.
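The core of any disparity search can be illustrated with a minimal sum-of-absolute-differences matcher on one rectified scanline; this is a generic block-matching sketch, not the paper's fusion-strategy algorithm or its pipelined FPGA design:

```python
def disparity_sad(left, right, window, max_disp):
    """Per-pixel disparity on one scanline by sum of absolute differences.

    left, right: lists of intensities from rectified scanlines.
    window: half-width of the matching window.
    max_disp: disparity search range (the paper's hardware searches 240 px).
    """
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for w in range(-window, window + 1):
                xl = min(max(x + w, 0), n - 1)          # clamp at borders
                xr = min(max(x - d + w, 0), n - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

The FPGA implementation gains its 125 fps throughput by evaluating all candidate disparities of the inner loop in parallel pipeline stages rather than sequentially as here.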
BDM Corp., McLean, VA. Video Automatic Target Tracking System (VATTS) Operating Procedures, Aug. 1980. The front matter lists the system hardware (tape transports, a Tektronix I/O terminal, removable and fixed disk storage units, a cathode ray tube display) and operating programs, including AZEL (quick look at trial information), DUPTAPE (duplication of magnetic tapes) and CA (cancel, terminating a program).
Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise
Measuring the location of the shoreline and monitoring foreshore changes through time are fundamental tasks for correct coastal management at many sites around the world. Several authors have demonstrated video systems to be an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons and years. In the past few years, with the wide availability of video cameras at relatively low prices, the use of video cameras and video-image analysis for environmental monitoring has increased significantly. Although video monitoring systems were often used in research, they are most often applied for practical purposes, including: i) identification and quantification of shoreline erosion; ii) assessment of coastal protection structure and/or beach nourishment performance; iii) basic input to engineering design in the coastal zone; and iv) support for integrated numerical model validation. Here we present the guidelines for the creation of a new video monitoring network near Jesolo beach (NW Adriatic Sea, Italy). Within this 10 km-long tourist district several engineering structures have been built in recent years with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes and detached breakwaters. The area experienced severe coastal erosion in past decades, including a major event in November 2012. The activity is planned within the framework of the RITMARE project, which also includes other monitoring and scientific activities (bathymetry surveys, wave and current measurements, hydrodynamic and morphodynamic modelling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the
Yang, Xiao-Song; Jiang, Zheng-Bing; Song, Hui-Ting; Jiang, Si-Jing; Madzak, Catherine; Ma, Li-Xin
A novel surface-display system was constructed using the cell-wall anchor protein Flo1p from Saccharomyces cerevisiae; the mannanase (man1) from Bacillus subtilis was fused to the C-terminus of Flo1p, and a 6xHis tag was inserted between Flo1p and man1. The fusion protein was successfully displayed on the cell surface of Yarrowia lipolytica, as confirmed by immunofluorescence. The surface-displayed mannanase was then characterized. The optimum catalytic conditions for the recombinant mannanase were 55 degrees C at pH 6.0, and it exhibited high stability against pH variation. The highest activity of the recombinant mannanase reached 62.3 IU/g (dry cell weight) after the recombinant strain was cultivated for 96 h in YPD medium [1% (w/v) yeast extract/2% (w/v) peptone/2% (w/v) glucose]. To our knowledge, this is the first report of high-activity mannanase displayed on the cell surface of Y. lipolytica using Flo1p.
Li, Xue; Xu, Hui-juan; Qin, Ling-ling; Zheng, Long-jiang
The LED display integrates microelectronics, computer technology and information processing, and has become one of the most prominent new-generation display media thanks to its bright colors, high dynamic range, high brightness and long operating life. LED displays are widely used in banks, securities trading, highway signs, airports, advertising and elsewhere. By display color, LED screens are divided into monochrome, dual-color and full-color displays. With the diversification of LED display colors and steadily rising display requirements, LED drive circuits and control technology have progressed correspondingly. The earliest monochrome screens displayed only Chinese characters, simple characters or digits, so the demands on the controller were relatively low. With the wide adoption of dual-color LED displays, controller performance requirements increased. In recent years, the full-color LED display, combining the three primary colors (red, green, blue) with grayscale rendering, has attracted great attention for its rich and colorful display effects. Each true-color pixel comprises red, green and blue sub-pixels, using spatial color mixing to realize multiple colors. The dynamic scanning control system for a full-color LED display described here is based on the low-power MSP430 microcontroller. The grayscale control combines pulse-width modulation (PWM) with a time-division scanning scheme; while meeting the requirement of 256 grayscale levels, this method improves the efficiency of the LED devices and enhances the perceived depth of the image. The drive circuit uses a 1/8-scanning constant-current drive mode and makes full use of the microcontroller's I/O resources to complete the control. The system supports display of text and pictures at 256 grayscale levels.
We investigate the video assignment problem of a hierarchical Video-on-Demand (VOD) system in heterogeneous environments, where different quality levels of videos can be encoded using either replication or layering. In such systems, videos are delivered to clients either through a proxy server or through video broadcast/unicast channels. The objective of our work is to determine the appropriate coding strategy, as well as the suitable delivery mechanism, for each quality level of a video such that the overall system blocking probability is minimized. To find a near-optimal solution to this complex video assignment problem, an evolutionary approach based on a genetic algorithm (GA) is proposed. The results show that system performance can be significantly enhanced by efficiently coupling the various techniques.
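A minimal sketch of the evolutionary approach, assuming a binary genome in which each gene picks a coding/delivery mode for one video; the fitness function passed in here is a stand-in, since the paper's actual objective is the system blocking probability:

```python
import random

def genetic_search(num_videos, cost, pop_size=30, generations=60, seed=1):
    """Tiny elitist GA: each gene assigns one video a mode
    (0 = replicated via proxy, 1 = layered via broadcast channel).
    `cost` maps an assignment to the quantity being minimized."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(num_videos)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                    # fittest (lowest cost) first
        elite = pop[: pop_size // 2]          # keep the top half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)       # pick two elite parents
            cut = rng.randrange(1, num_videos)
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.1:            # occasional bit-flip mutation
                i = rng.randrange(num_videos)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

In the paper the evaluation of each candidate assignment would involve computing blocking probabilities from channel and proxy capacities; here any cost function over the genome can be substituted.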
White, Preston, III
Kennedy Space Center has a need for economical transmission of two multiplexed video signals over multimode fiber-optic systems. These systems must span unusual distances and must meet RS-250B short-haul standards after reception. Bandwidth is a major constraint, and studies of the installed fibers and available LEDs and PINFETs led to the choice of 100 MHz as the upper limit for the system bandwidth. Optical multiplexing and digital transmission were deemed inappropriate. Three electrical multiplexing schemes were chosen for further study, each including an FM stage to help meet the stringent S/N specification. Both FM and AM frequency-division multiplexing methods were investigated theoretically, and these results were validated with laboratory tests. The novel application of quadrature amplitude multiplexing was also considered. Frequency-division multiplexing of two wideband FM video signals appears the most promising scheme, although this application requires high-power, highly linear LED transmitters. Further studies are necessary to determine whether LEDs of appropriate quality exist and to better quantify the performance of QAM in this application.
OFarrell, Zachary L.
Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.
Brown, Michael A.
The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose a HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
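The weighted averaging of radiance maps can be sketched as follows, assuming already-aligned 8-bit frames and a simple triangle weighting that favours well-exposed mid-range pixels; the authors' exact weighting function is not specified in the abstract:

```python
def fuse_hdr(exposures, times):
    """Weighted-average fusion of aligned frames into a radiance map.

    exposures: list of frames (flat lists of pixel values in [0, 255]),
    already aligned; times: the matching exposure times.  Each pixel's
    radiance estimate is a weighted mean of (value / exposure time)."""
    def weight(z):
        # triangle weight: peaks at mid-grey, small near 0 and 255;
        # the +1 keeps fully clipped pixels from having zero weight
        return min(z, 255 - z) + 1

    n = len(exposures[0])
    radiance = []
    for i in range(n):
        num = den = 0.0
        for frame, t in zip(exposures, times):
            w = weight(frame[i])
            num += w * frame[i] / t
            den += w
        radiance.append(num / den)
    return radiance
```

For a scene point of constant radiance, a pixel reading 100 at a 1-unit exposure and 200 at a 2-unit exposure both imply a radiance of 100, and the fused estimate agrees regardless of the weights.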
My time here at KSC has involved creating and updating HMI displays to support Pad 39B and the Mobile Launcher. I also had the opportunity to be involved with testing PLC hardware for electromagnetic interference. This report explains in more detail the steps involved in successfully completing these responsibilities I have been fortunate enough to be involved with.
color appearance of targets with annular and Mondrian-type displays, has already begun in collaboration with L. Arend of the Eye Research Institute...greatly enhanced control of the adaptation conditions, and then remeasure color constancy in the Mondrian situation. This project is also in the software
volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.
Sandy, C. L. M.; Meiyanti, R.
Height measurement compares the magnitude of an object against a standard measuring tool. A problem with current practice is the use of simple apparatus, such as a tape measure, which takes a relatively long time. To overcome this, this research aims to create image-processing software for height measurement. The captured image is then tested: an object captured by the video camera is detected, using Otsu's thresholding method, so that its height can be measured. The system was built in Delphi 7 using the VisionLab VCL 4.5 component. To improve the quality of the system in future research, it can be combined with other methods.
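Otsu's method, used above to separate the object from the background before measuring its height, chooses the grey-level threshold that maximises the between-class variance of the resulting foreground/background split; a minimal sketch:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the grey level that maximises
    between-class variance over the pixel histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                      # background class: values <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground class: values > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a strongly bimodal image (e.g. a dark object on a bright background) the chosen threshold falls between the two modes, so the object mask, and hence its pixel height, follows directly.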
Giaccone, Agnese; Solli, Piergiorgio; Bertolaccini, Luca
The magnetic anchoring guidance system (MAGS) is one of the most promising technological innovations in minimally invasive surgery and consists of two magnetic elements coupled through the abdominal or thoracic wall. The internal magnet can be inserted into the abdominal or chest cavity through a small single incision and then moved into position by manipulating the external component. In addition to a video camera system, the inner magnetic platform can house remotely controlled surgical tools, thus reducing instrument fencing, a serious inconvenience of uniportal access. The latest prototypes are equipped with built-in light-emitting diode (LED) illumination and a wireless antenna for signal transmission and device control, which bypasses the obstacle of wires crossing the field of view (FOV). Despite being originally designed for laparoscopic surgery, the MAGS seems optimally suited to the characteristics of the chest wall and might meet the specific demands of video-assisted thoracic surgery (VATS) in terms of ergonomics, visualization and surgical performance; moreover, it involves less risk for the patient and an improved aesthetic outcome.
Yang, Jie Chi; Huang, Yi Ting; Tsai, Chi Cheng; Chung, Ching I.; Wu, Yu Chieh
In recent years, using video as a learning resource has received a lot of attention and has been successfully applied to many learning activities. In comparison with text-based learning, video learning integrates more multimedia resources, which usually motivate learners more than texts. However, one of the major limitations of video learning is…
Arthur, Jarvis (Trey) J., III; Shelton, Kevin J.; Prinzel, Lawrence J.; Nicholas, Stephanie N.; Williams, Steven P.; Ellis, Kyle E.; Jones, Denise R.; Bailey, Randall E.; Harrison, Stephanie J.; Barnes, James R.
Research, development, test, and evaluation of flight deck interface technologies is being conducted by the National Aeronautics and Space Administration (NASA) to proactively identify, develop, and mature tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System (NextGen). One specific area of research was the use of small Head-Worn Displays (HWDs) to serve as a possible equivalent to a Head-Up Display (HUD). A simulation experiment and a flight test were conducted to evaluate whether the HWD can provide an equivalent level of performance to a HUD. For the simulation experiment, airline crews conducted simulated approach and landing, taxi, and departure operations during low visibility operations. In a follow-on flight test, highly experienced test pilots evaluated the same HWD during approach and surface operations. The results for both the simulation and flight tests showed that there were no statistical differences in the crews' performance in terms of approach, touchdown and takeoff; but there are still technical hurdles to be overcome for complete display equivalence, most notably the end-to-end latency of the HWD system.
Silvia Tamez González
video display terminal operators, the conditions investigated are frequent, especially musculoskeletal disorders of the hands; moreover, task enrichment and workers' own control of the labor process had a protective effect against psychosomatic disorders and pathological fatigue. OBJECTIVE: To evaluate the association between video display terminal (VDT) use and health hazards, occupational risks, and psychosocial factors in newspaper workers. MATERIALS AND METHODS: A cross-sectional study was conducted in 1998 in a representative sample (n=68) drawn from a population of 218 VDT operators in Mexico City. Data were collected using a self-administered questionnaire and confirmed by physical examinations. The research hypothesis was that both current and cumulative use of VDTs is associated with visual, musculoskeletal, and skin illnesses, as well as with fatigue and mental or psychosomatic disorders. Occupational health hazards were assessed (visual problems, postural risks, sedentary work, computer mouse use, excessive heat, and overcrowding), as were psychosocial factors related to work organization (psychological demands, work control, and social support). Prevalence ratios were adjusted for confounding variables such as age, sex and schooling. RESULTS: Women were more likely than men to have upper extremity musculoskeletal disorders (MSD), dermatitis, and seborrheic eczema. VDT use was associated with neuro-visual fatigue, upper extremity MSD, dermatitis, and seborrheic eczema. Computer mouse use and postural risks were significantly associated with health problems. Psychosocial factors were mainly associated with mental problems, psychosomatic disorders, and fatigue. CONCLUSIONS: Intense use of video screens was found to cause musculoskeletal disorders of the hand. Task diversification and control of the labor process itself had a protective effect against psychosomatic disorders and pathological fatigue.
Bhattacharya, Sharmila; Inan, Omer; Kovacs, Gregory; Etemadi, Mozziyar; Sanchez, Max; Marcu, Oana
populations in terrestrial experiments, and could be especially useful in field experiments in remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed - not flying - and second, although it enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average light changes as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies. This camera had automatic gain control, and this did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at one time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract the fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used for extracting activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying environmental temperature, then measuring the activity level of the flies.
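A digital stand-in for the analog band-pass event extraction might look like the following sketch, where successive-sample luminance differences approximate the AC component of the signal and rising threshold crossings mark activity events; the threshold value and function names are illustrative, not the circuit's actual parameters:

```python
def activity_events(luminance, threshold):
    """Turn a mean-luminance trace into discrete activity events.

    A software analogue of the band-pass circuit: differences between
    successive samples approximate luminance change over time, and a
    rising threshold crossing marks a fly entering or leaving the image.
    """
    events = []
    above = False
    for i in range(1, len(luminance)):
        change = abs(luminance[i] - luminance[i - 1])
        if change >= threshold and not above:
            events.append(i)          # record sample index of the event
            above = True
        elif change < threshold:
            above = False             # re-arm once the change subsides
    return events


def inter_event_durations(events):
    """Intervals between consecutive events, one activity parameter
    mentioned in the study."""
    return [b - a for a, b in zip(events, events[1:])]
```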
Lloyd, Charles J.; Winterbottom, M.; Gaska, J.; Williams, L.
Over the past few decades the term "eye-limited resolution" has seen significant use. However, several variations in the definition of the term have been employed and estimates of the display pixel pitch required to achieve it differ significantly. This paper summarizes the results of published evaluations and experiments conducted in our laboratories relating to resolution requirements. The results of several evaluations employing displays with sufficient antialiasing indicate a pixel pitch of 0.5 to 0.93 arcmin will produce 90% of peak performance for observers with 20/20 or better acuity for a variety of visual tasks. If insufficient antialiasing is employed, spurious results can indicate that a finer pixel pitch is required due to the presence of sampling artifacts. The paper reconciles these findings with hyperacuity task performance which a number of authors have suggested may require a much finer pixel pitch. The empirical data provided in this paper show that hyperacuity task performance does not appear to be a driver of eye-limited resolution. Asymptotic visual performance is recommended as the basis of eye-limited resolution because it provides the most stable estimates and is well aligned with the needs of the display design and acquisition communities.
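The angular pixel pitch figures can be checked with a one-line subtense calculation; for example, a 0.1 mm pitch viewed from 600 mm subtends about 0.57 arcmin, inside the 0.5 to 0.93 arcmin band reported above (the specific pitch and distance are illustrative values):

```python
import math

def pixel_pitch_arcmin(pitch_mm, distance_mm):
    """Angular pixel pitch in arcminutes for a given physical pitch
    and viewing distance (angle subtended via atan)."""
    return math.degrees(math.atan(pitch_mm / distance_mm)) * 60.0
```

Inverting the same relation gives the physical pitch a display would need at a given design eye distance to reach the recommended angular resolution.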
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
In the information age, video processing is developing rapidly toward intelligence, and complex algorithms pose a powerful challenge to processor performance. This article describes an FPGA + TMS320C6678 architecture that integrates image defogging, image fusion, stabilization and enhancement into an organic whole, with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses video applications in security monitoring and elsewhere, giving full play to the effectiveness of video surveillance and improving economic benefits for the enterprise.
This paper presents an H.323-compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between H.323 LAN (local-area network) and H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features, including 2D video object scaling, repositioning, rotation, and dynamic bit allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extraction and tracking of foreground video objects from video captured with an active camera. Chroma-key insertion is used to facilitate video object extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
Д В Сенашенко
The article describes distance learning systems used in world practice. The author gives a classification of video communication systems and discusses aspects of using Skype software in the Russian Federation. In conclusion, the author reviews modern production video conference systems used as tools for distance learning.
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... certain video analytics software, systems, components thereof, and products containing same by reason of..., Inc. The remaining respondents are Bosch Security Systems, Inc.; Robert Bosch GmbH; Bosch...
... COMMISSION Certain Video Analytics Software, Systems, Components Thereof, and Products Containing Same... States after importation of certain video analytics software, systems, components thereof, and products...; Bosch Security Systems, Inc. of Fairpoint, New York; Samsung Techwin Co., Ltd. of Seoul, Korea; Samsung...
Montoya, R. J.; England, J. N.; Hatfield, J. J.; Rajala, S. A.
The hardware configuration, software organization, and applications software for the NASA IKONAS color graphics display system are described. The systems were created at the Langley Research Center Display Device Laboratory to develop, evaluate, and demonstrate advanced generic concepts, technology, and systems integration techniques for electronic crew station systems of future civil aircraft. A minicomputer with 64K core memory acts as a host for a raster scan graphics display generator. The architectures of the hardware system and the graphics display system are provided. The applications software features a FORTRAN-based model of an aircraft, a display system, and the utility program for real-time communications. The model accepts inputs from a two-dimensional joystick and outputs a set of aircraft states. Ongoing and planned work for image segmentation/generation, specialized graphics procedures, and higher level language user interface are discussed.
This work presents a novel indoor video surveillance system capable of detecting human falls and evaluating human posture. To evaluate human movements, a background model is built using the codebook method, and the possible positions of moving objects are extracted using background and shadow elimination. Foreground extraction introduces noise and damage into the image; the noise is removed using morphological and size filters, and the damaged image is repaired. Once the human object is extracted, whether the posture has changed is evaluated using the aspect ratio and height of the body. The system then detects posture changes and extracts the histogram of the object projection to represent its appearance. The histogram becomes the input vector of a K-Nearest Neighbor (K-NN) classifier used to evaluate the object's posture. By accurately detecting different human postures, the proposed system increases fall detection accuracy. Importantly, the proposed method detects posture using the frame ratio and the displacement of height in the image. Experimental results demonstrate that the proposed system further improves system performance and fall identification accuracy.
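The aspect-ratio and height cues can be sketched as a toy rule-based classifier; the 1.2 ratio cutoff and the 50% height-drop rule are illustrative values, not the paper's trained K-NN model over projection histograms:

```python
def posture_from_bbox(width, height, lying_ratio=1.2):
    """Classify posture from the foreground bounding box: a wide, short
    box (width/height above the cutoff) suggests a lying body."""
    return "lying" if width / height > lying_ratio else "upright"


def detect_fall(frames, drop=0.5):
    """Report a fall when the tracked body height drops sharply between
    consecutive frames AND the resulting posture is lying.

    frames: sequence of (width, height) bounding boxes over time.
    """
    for (w0, h0), (w1, h1) in zip(frames, frames[1:]):
        if h1 < h0 * drop and posture_from_bbox(w1, h1) == "lying":
            return True
    return False
```

In the paper the posture decision comes from a K-NN vote over appearance histograms rather than a fixed ratio, which is what lets it distinguish, say, bending from falling.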
Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.
FPGA devices with embedded DSP blocks, memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features in a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are handled by a high-level-language program executing on an embedded Nios II processor. Pre-tested and verified video and interface functions from a standard video framework are used to significantly reduce development and verification time. Custom parallel processing modules are integrated into the video processing chain via Altera's Avalon Streaming video protocol; other data and control interfaces are achieved by connecting hardware controllers to the Nios II processor using Altera's Avalon Memory-Mapped protocol.
Wen, Ming; Hu, Haibo
To meet the demands of high-definition video and real-time transmission during endoscopic surgery, this paper presents the design of an HD mobile video transmission system. The system encodes the original video data with H.264/AVC and transmits it over the network using the RTP/RTCP protocols. The system achieves stable video transmission to portable terminals (such as tablet PCs and mobile phones) over a 3G mobile network. Test results verify strong error-repair ability and stability under conditions of low bandwidth, high packet loss rate, and high delay, and show high practical value.
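The RTP transport mentioned above can be illustrated by packing the 12-byte fixed RTP header defined in RFC 3550; payload type 96 is a common (but here assumed) dynamic mapping for H.264:

```python
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96, marker=0):
    """Pack the 12-byte fixed RTP header (RFC 3550): version 2,
    no padding, no extension, zero CSRC entries."""
    byte0 = 2 << 6                        # V=2, P=0, X=0, CC=0
    byte1 = (marker << 7) | payload_type  # M bit + payload type
    return struct.pack("!BBHII", byte0, byte1,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
```

Each H.264 NAL unit (or fragment) would be appended after this header, with the marker bit set on the last packet of a frame.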
Miyazaki, Daisuke; Shiba, Kensuke; Sotsuka, Koji; Matsushita, Kenji
A volumetric display system based on three-dimensional (3D) scanning of an inclined image is reported. An optical image of a two-dimensional (2D) display, which is a vector-scan display monitor placed obliquely in an optical imaging system, is moved laterally by a galvanometric mirror scanner. Inclined cross-sectional images of a 3D object are displayed on the 2D display in accordance with the position of the image plane to form a 3D image. Three-dimensional images formed by this display system satisfy all the criteria for stereoscopic vision because they are real images formed in a 3D space. Experimental results of volumetric imaging from computed-tomography images and 3D animated images are presented.
Walton, James S.; Hallamasek, Karen G.
The value of high-speed imaging for making subjective assessments is widely recognized, but the inability to acquire useful data from image sequences in a timely fashion has severely limited the use of the technology. 4DVideo has created a foundation for a generic instrument that can capture kinematic data from high-speed images. The new system has been designed to acquire (1) two-dimensional trajectories of points; (2) three-dimensional kinematics of structures or linked rigid-bodies; and (3) morphological reconstructions of boundaries. The system has been designed to work with an unlimited number of cameras configured as nodes in a network, with each camera able to acquire images at 1000 frames per second (fps) or better, with a spatial resolution of 512 X 512 or better, and an 8-bit gray scale. However, less demanding configurations are anticipated. The critical technology is contained in the custom hardware that services the cameras. This hardware optimizes the amount of information stored, and maximizes the available bandwidth. The system identifies targets using an algorithm implemented in hardware. When complete, the system software will provide all of the functionality required to capture and process video data from multiple perspectives. Thereafter it will extract, edit and analyze the motions of finite targets and boundaries.
The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams. In addition to real-time tracking, the implemented system can provide purposive automatic camera movement (pan-tilt) in the direction determined by the movement of the tracked object. The complete system, including the camera interface, DDR2 external memory interface controller, the designed object tracking VLSI architecture, camera movement controller, and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed, and implemented system robustly tracks the target object in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities; these may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond the manufacturer's calibration. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to carry a current calibration sticker from the Standards Laboratory during any acceptance testing.
Allin, Thomas Højgaard; Neubert, Torsten; Laursen, Steen
at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over...
This work presents a fall detection system based on image processing technology that can detect falls of multiple people via analysis of video frames. First, the system uses a mixture-of-Gaussians background model to generate information about the background, and the noise and shadow of the background are eliminated to extract the possible positions of moving objects. Because extraction of the foreground image introduces noise and damage, morphological and size filters are used to eliminate the noise and repair the damage to the image. Extraction of the foreground image yields the locations of human heads in the image, and the median point, height, and aspect ratio of each person are calculated. These characteristics are used to trace objects, and their changes across consecutive images are used to determine whether persons enter or leave the scene. The fall detection method uses the height and aspect ratio of the human body, analyzes images in which one person overlaps another, and determines whether a human has fallen. Experimental results demonstrate that the proposed method can efficiently detect falls by multiple persons.
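A minimal sketch of per-pixel background modelling in the spirit of the mixture-of-Gaussians approach above; this simplified stand-in keeps a single Gaussian per pixel, and the learning rate and threshold are assumed values, not the paper's:

```python
import numpy as np

class RunningGaussianBackground:
    """Single-Gaussian-per-pixel background model: a pixel is foreground
    when it deviates more than k standard deviations from its mean."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var           # foreground mask
        bg = ~fg
        # update the model only where the pixel matched the background
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return fg
```

The morphological and size filtering described in the abstract would then be applied to the returned mask before blob extraction.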
Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard
The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
de Barros, Rui Sergio Monteiro; Brito, Marcus Vinicius Henriques; de Brito, Marcelo Houat; de Aguiar Lédo Coutinho, Jean Vitor; Teixeira, Renan Kleber Costa; Yamaki, Vitor Nagai; da Silva Costa, Felipe Lobato; Somensi, Danusa Neves
The surgical microscope is an essential tool for microsurgery. Nonetheless, several promising alternatives are being developed, including endoscopes and laparoscopes with video systems; however, these alternatives have so far been used only for arterial anastomoses. The aim of this study was to evaluate the use of a low-cost video-assisted magnification system in end-to-side neurorrhaphy in rats. Forty rats were randomly divided into four matched groups: (1) normality (the sciatic nerve was exposed but kept intact); (2) denervation (the fibular nerve was sectioned, and the proximal and distal stumps were sutured: transection without repair); (3) microscope; and (4) video system (the fibular nerve was sectioned; the proximal stump was buried inside the adjacent musculature, and the distal stump was sutured to the tibial nerve). Microsurgical procedures were performed under guidance from a microscope or a video system. We analyzed weight, nerve caliber, number of stitches, time required to perform the neurorrhaphy, muscle mass, peroneal functional indices, latency and amplitude, and numbers of axons. There were no significant differences in weight, nerve caliber, number of stitches, muscle mass, peroneal functional indices, or latency between the microscope and video system groups. Neurorrhaphy took longer using the video system (P …); … was higher in the microscope group than in the video group. It is possible to perform an end-to-side neurorrhaphy in rats under video system magnification; the success rate is satisfactory and comparable with that of procedures performed under surgical microscopes. Copyright © 2017 Elsevier Inc. All rights reserved.
National Aeronautics and Space Administration — ZIN Technologies, Inc will breadboard an integrated electronic system for space suit application to acquire images, biomedical sensor signals and suit health &...
Conclusion: The existing data collection systems using case-notes have met our present-day information needs poorly. Developing an electronic data collection system for monitoring patients in obstetric units is feasible in the developing world. Key Words: Maternity Care Monitoring, Health Records, Stationery [Trop J Obstet ...
Recent years have seen significant investment in, and increasingly effective use of, Video Analytics (VA) systems to detect intrusion or attacks in sterile areas. Currently a number of manufacturers have achieved the Imagery Library for Intelligent Detection System (i-LIDS) primary detection classification performance standard for the sterile zone detection scenario. These manufacturers have demonstrated the performance of their systems under evaluation conditions using uncompressed evaluation video. In this paper we consider the effect of compression on the detection rate of an i-LIDS primary approved sterile zone system, using compressed sterile zone scenario video clips as the input. Preliminary test results demonstrate that detection behavior changes with compression: the time to alarm increased with greater compression. Initial experiments suggest that detection performance does not degrade linearly as a function of compression ratio. These experiments form a starting point for a wider set of planned trials that the Home Office will carry out over the next 12 months.
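The time-to-alarm comparison described here can be scripted simply; the helper below is a hypothetical analysis aid, not part of the i-LIDS evaluation itself:

```python
def alarm_latency_shift(baseline_alarms, compressed_alarms):
    """Mean increase in time-to-alarm (seconds) over events detected in
    both the uncompressed and compressed runs.
    Inputs map event id -> alarm timestamp."""
    common = baseline_alarms.keys() & compressed_alarms.keys()
    if not common:
        return None  # no event detected in both runs
    return sum(compressed_alarms[e] - baseline_alarms[e] for e in common) / len(common)
```

Running this per compression ratio would yield the (non-linear) latency-versus-compression curve the abstract alludes to.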
Fern, Lisa; Rorie, R. Conrad; Pack, Jessica S.; Shively, R. Jay; Draper, Mark H.
A consortium of government, industry and academia is currently working to establish minimum operational performance standards for Detect and Avoid (DAA) and Control and Communications (C2) systems in order to enable broader integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). One subset of these performance standards will need to address the DAA display requirements that support an acceptable level of pilot performance. From a pilot's perspective, the DAA task is the maintenance of self separation and collision avoidance from other aircraft, utilizing the available information and controls within the Ground Control Station (GCS), including the DAA display. The pilot-in-the-loop DAA task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear of other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two information levels (basic, advanced) were compared across two display locations (standalone, integrated), for a total of four display conditions. The authors propose an eight-stage pilot-DAA interaction timeline from which several pilot response time metrics can be extracted. These metrics were compared across the four display conditions. The results indicate that the advanced displays yielded faster overall response times than the basic displays; however, there were no significant differences between the standalone and integrated displays. Implications of the findings on understanding pilot performance on the DAA task, the
Kjærgaard, Kristian; Hasman, Henrik; Schembri, Mark
to the outer membrane and secretion through the cell envelope is contained within the protein itself. Ag43 consists of two subunits (alpha and beta), where the beta-subunit forms an integral outer membrane translocator to which the alpha-subunit is noncovalently attached. The simplicity of the Ag43 system... makes it ideally suited as a surface display scaffold. Here we demonstrate that the Ag43 alpha-module can accommodate and display correctly folded inserts and has the ability to display entire functional protein domains, exemplified by the FimH lectin domain. The presence of heterologous cysteine... bridges does not interfere with surface display, and Ag43 chimeras are correctly processed into alpha- and beta-modules, offering optional and easy release of the chimeric alpha-subunits. Furthermore, Ag43 can be displayed in many gram-negative bacteria. This feature is exploited for display of our...
Zhang, Xue-Fang; Huang, Ren-Qun
This paper proposes a new computer-aided educational system for clothing visual merchandising and display. It aims to provide an operating environment that supports the various stages of display design in a user-friendly and intuitive manner. First, this paper provides a brief introduction to current software applications in the field of…
Bunch, Brian (Inventor)
An embodiment of the supplemental weather display system presents supplemental weather information on a display in a craft. An exemplary embodiment receives the supplemental weather information from a remote source, determines a location of the supplemental weather information relative to the craft, receives weather information from an on-board radar system, and integrates the supplemental weather information with the weather information received from the on-board radar system.
During the past academic year the focal point of this project has been to enhance the economical flight simulator system by incorporating it into the aero engineering educational environment. To accomplish this goal it was necessary to develop appropriate software modules that provide a foundation for student interaction with the system. In addition experiments had to be developed and tested to determine if they were appropriate for incorporation into the beginning flight simulation course, AERO-41B. For the most part these goals were accomplished. Experiments were developed and evaluated by graduate students. More work needs to be done in this area. The complexity and length of the experiments must be refined to match the programming experience of the target students. It was determined that few undergraduate students are ready to absorb the full extent and complexity of a real-time flight simulation. For this reason the experiments developed are designed to introduce basic computer architectures suitable for simulation, the programming environment and languages, the concept of math modules, evaluation of acquired data, and an introduction to the meaning of real-time. An overview is included of the system environment as it pertains to the students, an example of a flight simulation experiment performed by the students, and a summary of the executive programming modules created by the students to achieve a user-friendly multi-processor system suitable to an aero engineering educational program.
..., ``Nintendo''). The products accused of infringing the asserted patents are gaming systems and related... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Commission...
National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...
Yamada, Takaaki; Echizen, Isao; Tezuka, Satoru; Yoshiura, Hiroshi
Emerging broadband networks and the high performance of PCs provide new business opportunities for live video streaming services to Internet users at sport events and music concerts. Digital watermarking for video helps protect the copyright of the video content, and real-time processing is an essential requirement. For a small start to a new business, this should be achievable with flexible software, without special equipment. This paper describes a novel real-time watermarking system implemented on a commodity PC. We propose the system architecture and methods to shorten watermarking time by reusing the estimated watermark imperceptibility among neighboring frames. A prototype system enables real-time processing in a pipeline of capturing NTSC signals, watermarking the video, encoding it to MPEG-4 (QVGA, 1 Mbps, 30 fps), and storing the video for up to 12 hours.
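The paper's key speed-up, reusing the estimated imperceptibility among neighboring frames, can be sketched as caching a perceptual mask and recomputing it only periodically; both the mask function and the refresh interval below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def perceptual_mask(frame):
    """Crude local-activity mask: busier regions tolerate a stronger mark
    (a stand-in for a real imperceptibility estimate)."""
    grad = np.abs(np.diff(frame.astype(float), axis=0))
    return 1.0 + np.pad(grad, ((0, 1), (0, 0)))

def embed(frames, pattern, refresh_every=5):
    """Additively embed a +/-1 watermark pattern, recomputing the
    (expensive) perceptual mask only every `refresh_every` frames."""
    out, mask = [], None
    for i, frame in enumerate(frames):
        if i % refresh_every == 0:
            mask = perceptual_mask(frame)  # cost amortized across neighbors
        out.append(frame + mask * pattern)
    return out
```

Because adjacent frames in a live feed are highly correlated, a mask estimated on one frame remains approximately valid for its neighbors, which is what makes the reuse safe.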
Ramezani, Mohsen; Yaghmaee, Farzin
In recent years, the fast growth of online video sharing has raised new issues, such as helping users find what they need efficiently. Hence, Recommender Systems (RSs) are used to find users' most favored items. Finding these items relies on item or user similarities, though many factors, such as sparsity and cold-start users, affect recommendation quality. In some systems, attached tags are used for searching items (e.g. videos) as personalized recommendation; differing views and incomplete or inaccurate tags, among other problems, can weaken the performance of such systems. Advances in computer vision techniques can help improve RSs. To this end, content-based search can be used for finding items (here, videos): a query video is taken from the user, and a list of the most similar videos is found and recommended. Because most videos involve humans, we present a novel, low-complexity, scalable method to recommend videos based on a model of the depicted action. This method draws on human action retrieval approaches. To model human actions, interest points are extracted from each action and their motion information is used to compute the action representation. Moreover, a fuzzy dissimilarity measure is presented to compare videos for ranking. Experimental results on the HMDB, UCFYT, UCF Sport and KTH datasets illustrate that, in most cases, the proposed method achieves better results than the most widely used methods.
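The ranking step can be illustrated with a histogram-intersection-style fuzzy dissimilarity; the exact measure in the paper may differ, so treat this as an assumed stand-in:

```python
def fuzzy_dissimilarity(h1, h2):
    """Dissimilarity of two normalized motion histograms:
    1 minus the fuzzy intersection (sum of element-wise minima)."""
    return 1.0 - sum(min(a, b) for a, b in zip(h1, h2))

def recommend(query_hist, library, top_k=2):
    """Rank library videos (name -> histogram) by ascending
    dissimilarity to the query and return the top_k names."""
    ranked = sorted(library,
                    key=lambda name: fuzzy_dissimilarity(query_hist, library[name]))
    return ranked[:top_k]
```

With normalized histograms the measure lies in [0, 1], with 0 for identical action representations, which makes ranking straightforward.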
Zhang, Yongjun; Ling, Zhi
A smart home is a system for controlling home devices, which are more and more popular in our daily life. Mobile intelligent terminals for smart homes have been developed, making remote control and monitoring possible with smartphones or tablets. Meanwhile, 3D stereo display technology has developed rapidly in recent years. Therefore, an iPad-based smart home system that adopts an autostereoscopic display as the control interface is proposed to improve the user-friendliness of the experience. In consideration of the iPad's limited hardware capabilities, we introduce a 3D image synthesizing method based on parallel processing with the Graphics Processing Unit (GPU), implemented with the OpenGL ES Application Programming Interface (API) library on the iOS platform for real-time autostereoscopic display. Compared to a traditional smart home system, the proposed system applies an autostereoscopic display to the control interface, enhancing the realism, user-friendliness, and visual comfort of the interface.
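For a two-view autostereoscopic panel, the synthesis step amounts to interleaving the left and right views per display column. The abstract's system does this in an OpenGL ES shader on the GPU; the following is a simplified CPU sketch of the same idea (a real multi-view panel would interleave more views per its lenticular layout):

```python
import numpy as np

def interleave_views(left, right):
    """Column-interleave two equally sized views: even columns from
    the left view, odd columns from the right view."""
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]
    out[:, 1::2] = right[:, 1::2]
    return out
```

On the GPU this per-pixel selection is embarrassingly parallel, which is why it maps well onto a fragment shader even on limited mobile hardware.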
Jacobsen, R. A.; Bivens, C. C.; Rediess, N. A.; Hindson, W. S.; Aiken, E. W.; Aiken, Edwin W. (Technical Monitor)
The Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) is a UH-60A Black Hawk helicopter being modified by the US Army and NASA for flight systems research. The principal systems being installed in the aircraft are a Helmet Mounted Display (HMD) and imaging system and a programmable full-authority Research Flight Control System (RFCS). In addition, comprehensive instrumentation of both the rigid body of the helicopter and the rotor system is provided. The paper describes the capabilities of these systems and their current state of development, and includes a brief description of initial research applications. The wide (40 x 60 degree) field-of-view HMD system has been provided by Kaiser Electronics. It can be configured as a monochromatic system for use in bright daylight conditions, a two-color system for darker ambients, or a full-color system for night viewing conditions. Color imagery is achieved using field-sequential video and a mechanical color wheel. In addition to the color symbology, high-resolution computer-generated imagery from an onboard Silicon Graphics Reality Engine Onyx processor is available for research in virtual reality applications. This synthetic imagery can also be merged with real-world video from a variety of imaging systems that can be installed easily on the front of the helicopter; these sensors include infrared or TV cameras, or potentially small millimeter-wave radars. The Research Flight Control System is being developed for the aircraft by a team of contractors led by Boeing Helicopters. It consists of full-authority, high-bandwidth fly-by-wire actuators that drive the main rotor swashplate actuators and the tail rotor actuator in parallel. This arrangement allows the basic mechanical flight control system of the Black Hawk to be retained so that the safety pilot can monitor the operation of the system through the action of his own controls. The evaluation pilot will signal the fly
Pillastrini, Paolo; Mugnai, Raffaele; Bertozzi, Lucia; Costi, Stefania; Curti, Stefania; Guccione, Andrew; Mattioli, Stefano; Violante, Francesco S
This study investigated the effectiveness of a workstation ergonomic intervention for work-related posture and low back pain (LBP) in Video Display Terminal (VDT) workers. 100 VDT workers were selected to receive the ergonomic intervention, whereas 100 were assigned to a control group; the two groups were then crossed over 30 months from baseline. Follow-ups were repeated at 5, 12, and 30 months from baseline and then at 6 months following crossover. Work-related posture and LBP point-prevalence were assessed using the Rapid Entire Body Assessment method and a Pain Drawing, respectively. The ergonomic intervention at the workstation improved work-related posture and was effective in reducing LBP point-prevalence both in the first study period and after crossover, and these effects persisted for at least 30 months. In conclusion, our findings contribute to the evidence that individualized ergonomic interventions may improve work-related posture and reduce LBP for VDT workers. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
The potential of electromagnetic fields (EMFs) emitted from video display terminals (VDTs) to elicit a biological response is a major public concern. Software professionals are subjected to cumulative EMFs in their occupational environments. This study was undertaken to evaluate DNA damage and the incidence of micronuclei in such professionals. To the best of our knowledge, the present study is the first attempt to carry out cytogenetic investigations assessing bioeffects in personal computer users. The study subjects (n = 138) included software professionals using VDTs for more than 2 years, with age-, gender-, and socioeconomic-status-matched controls (n = 151). DNA damage and the frequency of micronuclei were evaluated using the alkaline comet assay and the cytochalasin-blocked micronucleus assay, respectively. Overall DNA damage and the incidence of micronuclei showed no significant differences between exposed and control subjects. Sub-groups were assessed for these parameters by exposure characteristics such as total duration of use (years) and frequency of use (minutes/day). Although cumulative frequency of use showed no significant changes in DNA integrity across the classified sub-groups, long-term users (> 10 years) showed higher induction of DNA damage and increased frequencies of micronuclei and micronucleated cells.
... systems for the delivery of video programming. 63.02 Section 63.02 Telecommunication FEDERAL... systems for the delivery of video programming. (a) Any common carrier is exempt from the requirements of... with respect to the establishment or operation of a system for the delivery of video programming. ...
... COMMISSION In the Matter of Certain Video Game Systems and Wireless Controllers and Components Thereof... importation, and the sale within the United States after importation of certain video game systems and... importation of certain video game systems and wireless controllers and components thereof that infringe one or...
Bales, John W.
The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory, expandable to 32 MB, and is capable of simultaneous acquisition and processing. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
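The quoted specifications imply some simple capacity figures. Assuming 8-bit pixels at a 512 x 512 resolution (an assumption for illustration; the board supports other formats):

```python
def frames_buffered(buffer_mb, width, height, bytes_per_pixel=1):
    """Whole frames that fit in the frame-buffer memory."""
    return (buffer_mb * 1024 * 1024) // (width * height * bytes_per_pixel)

def max_frame_rate(pixels_per_second, width, height):
    """Frame rate sustainable at a given pixel acquisition rate."""
    return pixels_per_second / (width * height)
```

Under these assumptions, the stock 4 MB buffer holds 16 full frames and the 40 Mpixel/s acquisition rate sustains roughly 150 frames per second at that resolution.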
National Aeronautics and Space Administration — Physical Optics Corporation (POC) proposes to develop a 3D cockpit display (3D-COD) system for improved pilot situational awareness and safety in 3D airspace by...
National Aeronautics and Space Administration — The physical world around us is three-dimensional (3D), yet most existing display systems with flat screens can handle only two-dimensional (2D) flat images that...
Masaoka, Kenichiro; Nishida, Yukihiro; Sugawara, Masayuki
... Although monochromatic light sources are required for a display to perfectly fulfill the system colorimetry, highly saturated emission colors using recent quantum dot technology may effectively achieve the wide gamut...
Egan, J. T.; Hart, J.; Burt, S. K.; Macelroy, R. D.
A visualization of molecular models can lead to a clearer understanding of the models. Sophisticated graphics devices supported by minicomputers make it possible for the chemist to interact with the display of a very large model, altering its structure. In addition to user interaction, the need arises also for other ways of displaying information. These include the production of viewgraphs, film presentation, as well as publication quality prints of various models. To satisfy these needs, the display capability of the Ames Interactive Modeling System (AIMS) has been enhanced to provide a wide range of graphics and plotting capabilities. Attention is given to an overview of the AIMS system, graphics hardware used by the AIMS display subsystem, a comparison of graphics hardware, the representation of molecular models, graphics software used by the AIMS display subsystem, the display of a model obtained from data stored in molecule data base, a graphics feature for obtaining single frame permanent copy displays, and a feature for producing multiple frame displays.
Aranyanak, Inthraporn; Reilly, Ronan G
This article describes a cheap and easy-to-use finger-tracking system for studying braille reading. It provides improved spatial and temporal resolution over the current available solutions and can be used with either a refreshable braille display or braille-embossed paper. In conjunction with a refreshable braille display, the tracking system has the unique capacity to implement display-change paradigms derived from sighted reading research. This will allow researchers to probe skilled braille reading in significantly more depth than has heretofore been possible.
Today's video surveillance systems are increasingly equipped with video content analysis for a great variety of applications. However, the reliability and robustness of video content analysis algorithms remain an issue. They have to be measured against ground truth data in order to quantify the performance and advancements of new algorithms. A variety of measures have been proposed in the literature, but there has been neither a systematic overview nor an evaluation of measures for specific video analysis tasks. This paper provides a systematic review of measures and compares their effectiveness for specific aspects such as segmentation, tracking, and event detection, with attention to details such as normalization issues, robustness, and representativeness. A software framework is introduced for continuously evaluating and documenting the performance of video surveillance systems. Based on many years of experience, a new set of representative measures is proposed as a fundamental part of an evaluation framework.
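One of the simplest families of measures such a review covers, frame-level precision and recall against ground truth, can be sketched as follows (exact set matching of frame/object pairs is an assumption; practical measures usually match detections via spatial-overlap thresholds):

```python
def frame_precision_recall(detections, ground_truth):
    """Frame-level precision/recall where detections and ground_truth
    are sets of (frame_index, object_id) pairs."""
    tp = len(detections & ground_truth)
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

The normalization question the paper raises shows up even here: whether empty frames count toward the score, and how to aggregate across sequences of different lengths, changes the reported numbers.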
ARI Research Note 88-106: Human Cognition and Information Display in C3I System Tasks. Howell, William C.; Lane, David M.; Holden, Kritina L. (Rice University). References include: nuclear control room improvements through analysis of critical operator decisions, in R. C. Sugarman (Ed.), Proceedings of the 25th Annual Meeting of...
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
Panayides, A S; Pattichis, M S; Constantinides, A G; Pattichis, C S
The emergence of the new, High Efficiency Video Coding (HEVC) standard, combined with wide deployment of 4G wireless networks, will provide significant support toward the adoption of mobile-health (m-health) medical video communication systems in standard clinical practice. For the first time since the emergence of m-health systems and services, medical video communication systems can be deployed that can rival the standards of in-hospital examinations. In this paper, we provide a thorough overview of today's advancements in the field, discuss existing approaches, and highlight the future trends and objectives.
M.O. Widyantara
A video surveillance system (VSS) is an IP-camera-based monitoring system, implemented as live streaming and serving to observe and monitor a site remotely. Typically, an IP camera in a VSS ships with a management software application. However, for ad hoc applications, where the user wants to manage the VSS independently, such management software becomes ineffective. When IP cameras are installed across a large area, it is difficult for an administrator to describe the location of each camera, and monitoring an area of IP cameras also becomes more difficult. Addressing these shortcomings, this paper proposes a VSS application for easy monitoring of each IP camera. The proposed application integrates the concept of a web-based geographical information system (Web-GIS) with the Google Maps API. The application is built with smart features including an IP-camera map, live streaming of events, information in the info window, and marker clustering. Test results showed that the application is able to display all the built-in features well.
National Research Council Staff
... Sciences and Education, National Research Council. National Academy Press, Washington, D.C., 1983.
Lin, Bor-Shyh; Wu, Pei-Jung; Chen, Chien-Yu
Recently, 2D/3D switchable displays have become the mainstream of 3D display technologies, and people can now watch 3D movies on a naked-eye 2D/3D switchable display at home. However, some studies have indicated that people may experience visual fatigue after enjoying a 3D film in the theater. Although 2D/3D switchable technologies have been widely developed, 3D display technologies still lack ergonomic and human-care considerations such as reducing visual fatigue. This study proposes a novel 2D/3D display auto-adjustment switch system that provides biofeedback functions to reduce users' visual fatigue. In addition, the relationship between the blink rate and the visual fatigue state while watching 3D films was investigated and quantified. In this study, liquid crystal barrier technology was used to develop a 2D/3D switchable display, and a wearable EOG acquisition device was designed to monitor electrooculography signals and estimate the blink rate. The 2D/3D display auto-adjustment criterion of the proposed system was designed according to the change in the visual fatigue state as estimated from the blink rate. Finally, the experimental results show that the proposed system could effectively reduce users' visual fatigue while watching 3D films.
Quist, Arbor J L; Hickman, Thu-Trang T; Amato, Mary G; Volk, Lynn A; Salazar, Alejandra; Robertson, Alexandra; Wright, Adam; Bates, David W; Phansalkar, Shobha; Lambert, Bruce L; Schiff, Gordon D
The variations in how drug names are displayed in computerized prescriber-order-entry (CPOE) systems were analyzed to determine their contribution to potential medication errors. A diverse set of 10 inpatient and outpatient CPOE system vendors and self-developed CPOE systems in 6 U.S. healthcare institutions was evaluated. A team of pharmacists, physicians, patient-safety experts, and informatics experts created a CPOE assessment tool to standardize the assessment of CPOE features across the systems studied. Hypothetical scenarios were conducted with test patients to study the medication ordering workflow and ways in which medications were displayed in each system. Brand versus generic drug name ordering was studied at 1 large outpatient system to understand why prescribers ordered both brand and generic forms of the same drug. Widespread variations in the display of drug names were observed both within and across the 6 study sites and 10 systems, including the inconsistent display of brand and generic names. Some displayed drugs differently even on the same screen. Combination products were often displayed inconsistently, and some systems required prescribers to know the first drug listed in the combination in order for the correct product to appear in a search. It also appeared that prescribers may have prescribed both brand and generic forms of the same medication, creating the potential for drug duplication errors. A review of 10 CPOE systems revealed that medication names were displayed inconsistently, which can result in confusion or errors in reviewing, selecting, and ordering medications. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Gershzohn, Gary R.; Sirko, Robert J.; Zimmerman, K.; Jones, A. D.
This task concerns the design, development, testing, and evaluation of a new proximity operations planning and flight guidance display and control system for manned space operations. A forecast, derivative manned maneuvering unit (MMU) was identified as a candidate for the application of a color, highway-in-the-sky display format for the presentation of flight guidance information. A Silicon Graphics 4D/20-based simulation is being developed to design and test display formats and operations concepts. The simulation includes the following: (1) real-time color graphics generation to provide realistic, dynamic flight guidance displays and control characteristics; (2) real-time graphics generation of spacecraft trajectories; (3) MMU flight dynamics and control characteristics; (4) control algorithms for rotational and translational hand controllers; (5) orbital mechanics effects for rendezvous and chase spacecraft; (6) inclusion of appropriate navigation aids; and (7) measurement of subject performance. The flight planning system under development provides for: (1) selection of appropriate operational modes, including minimum cost, optimum cost, minimum time, and specified ETA; (2) automatic calculation of rendezvous trajectories, en route times, and fuel requirements; and (3) provisions for manual override. Man/machine function allocations in planning and en route flight segments are being evaluated. Planning and en route data are presented on one screen composed of two windows: (1) a map display presenting a view perpendicular to the orbital plane, depicting flight planning trajectory and time data; and (2) an attitude display presenting local vertical-local horizontal attitude and course data for use en route, superimposed on a highway-in-the-sky or flight channel representation of the flight-planned course. Both display formats are presented while the MMU is en route. In addition to these displays, several original display
Ozbek, Christopher S.; Giesler, Bjorn; Dillmann, Ruediger
A fundamental decision in building augmented reality (AR) systems is how to accomplish the combining of the real and virtual worlds. Nowadays this key question boils down to two alternatives: video see-through (VST) vs. optical see-through (OST). Both systems have advantages and disadvantages in areas like production simplicity, resolution, flexibility in composition strategies, field of view, etc. To provide additional decision criteria for high-dexterity, high-accuracy tasks and for subjective user acceptance, a gaming environment was programmed that allowed good evaluation of hand-eye coordination and that was inspired by the Star Wars movies. During an experimentation session with more than thirty participants, a preference for optical see-through glasses in conjunction with infrared tracking was found. In particular, the high computational demand of video capture and processing, and the resulting drop in frame rate, emerged as a key weakness of the VST system.
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes...
Xia, Zhen-Hua; Wang, Xiao-Shuang
With the rapid development of electronic, multimedia, and mobile communication technology, video monitoring systems are moving in an embedded, digital, and wireless direction. In this paper, a wireless video monitoring system based on WCDMA is proposed. This solution makes full use of the advantages of 3G, namely extensive network coverage and wide bandwidth. It captures the video stream from the chip's video port, encodes the image data in real time with a high-speed DSP, and has enough bandwidth to transmit the monitoring images over the WCDMA wireless network. Experiments demonstrate that the system offers high stability, good image quality, and good transmission performance; in addition, because it adopts wireless transmission, it is not restricted by geographical position, making it well suited to sparsely populated, harsh-environment scenarios.
Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong
The data acquisition and remote real-time display system for the neutral beam injectors (NBI) on the Experimental Advanced Superconducting Tokamak (EAST) is described in this paper. Distributed computers, comprising a local data acquisition (DAQ) facility, a remote data server (DS), and a real-time display terminal, are adopted under the Linux and Windows operating systems. Experimental signals are gathered by the DAQ device in the field. On the one hand, the gathered data are sent to the DS, which runs on the remote server in the main control layer of the EAST NBI control network, for storage and processing; on the other hand, they are sent to the real-time display terminal, which runs in the remote monitoring layer, for displaying and monitoring experimental signals in real time. Notably, the real-time display software can call back historical data from the DS for querying. The data acquisition and DS software are programmed in C, while the real-time display software is programmed as a LabVIEW flow chart. The hardware mainly includes DAQ cards, a server, an industrial personal computer, and other auxiliary hardware. The system has performed well in experiments on the NBI test bed.
...classic films licensed from nearly every major studio... into separate FM signals for video and audio... dual soundtrack or stereo sound... though never disruptive. While my enthusiasm for the subject was distinctly limited, I felt almost as if I were in the presence of a historically
Deleuze, Jory; Christiaens, Maxime; Nuyens, Filip; Billieux, Joël
Studies have shown that regular video game use might improve cognitive and social skills. In contrast, other studies have documented the negative outcomes of excessive gaming vis-a-vis health and socioprofessional spheres. Both positive and negative outcomes of video game use were linked to their structural characteristics (i.e., features that make the game appealing or are inducements for all gamers to keep playing regularly). The current study tested whether active video gamers from main ge...
Video surveillance systems are based on video and image processing research areas within computer science. Video processing covers various methods used to track changes in the scene of a given video, and is nowadays one of the important areas of computer science. Two-dimensional videos are subjected to various segmentation, object detection, and tracking processes in multimedia content-based indexing, information retrieval, visual and distributed cross-camera surveillance systems, people tracking, traffic tracking, and similar applications. Background subtraction (BS) is a frequently used method for moving object detection and tracking, and similar methods exist in the literature. This research study proposes a more efficient method as an addition to the existing ones. Based on a model produced using adaptive background subtraction (ABS), an object detection and tracking system is implemented in software. The performance of the developed system is tested via experiments with related video datasets; the experimental results and discussion are given in the study.
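The adaptive background subtraction approach described above can be sketched with a simple running-average background model. This is a minimal illustration, not the paper's method: the frame data, threshold, and learning rate below are invented for demonstration.

```python
import numpy as np

def adaptive_background_subtraction(frames, alpha=0.05, threshold=30):
    """Running-average background model; returns a foreground mask per frame."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        f = frame.astype(np.float64)
        # Pixels deviating strongly from the background model are foreground
        mask = np.abs(f - background) > threshold
        # Update the model only where the scene looks like background, so slow
        # illumination changes are absorbed but moving objects are not
        background = np.where(mask, background, (1 - alpha) * background + alpha * f)
        masks.append(mask)
    return masks

# Hypothetical demo: a static 8x8 scene with a bright 2x2 "object" in frame 3
scene = np.full((8, 8), 50, dtype=np.uint8)
frames = [scene.copy() for _ in range(5)]
frames[3][2:4, 2:4] = 200
masks = adaptive_background_subtraction(frames)
print(masks[2].sum())  # 4 foreground pixels detected in frame 3
```

The adaptive update (the `alpha` term) is what distinguishes ABS from static background subtraction: the model slowly tracks gradual scene changes instead of flagging them as motion.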
Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin
Introduction: While there is an increasing demand for minimally invasive operative techniques in Ear, Nose and Throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen through polarized filters. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse, and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once, on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session. Conclusion: High definition
M. van Persie
During a fire incident, live airborne video offers the fire brigade an additional source of information. Essential for effective use of the daylight and infrared video data from the UAS is that the information be fully integrated into the fire brigade's crisis management system. This is a GIS-based system in which all relevant geospatial information is brought together and automatically distributed to all levels of the organisation. In the context of the Dutch Fire-Fly project, a geospatial video server was integrated with a UAS and the fire brigade's crisis management system, so that real-time geospatial airborne video and derived products can be made available at all levels during a fire incident. The most important elements of the system are the Delftdynamics Robot Helicopter, the Video Multiplexing System, the Keystone geospatial video server/editor, and the Eagle and CCS-M crisis management systems. In discussion with the Security Region North East Gelderland, user requirements and a concept of operations were defined, demonstrated, and evaluated. This article describes the technical and operational approach and results.
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
Combine harvesters usually work in sparsely populated areas with harsh environments. To achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab, and cutting table. It compresses the video data with the JPEG image compression standard and transfers the monitoring images to a remote monitoring center over the network for long-range monitoring and management. The paper first motivates the design of the system, then briefly introduces the hardware and software implementation, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
To let people in different places participate in the same conference and speak and discuss freely, an interactive remote video conferencing system is designed and realized based on multi-Agent collaboration. FEC (forward error correction) and tree-based P2P technology are first used to build a live conference structure for transferring audio and video data; a branch conference port can then join the discussion by applying to become an interactive focus. The introduction of multi-Agent collaboration technology improves the system's robustness. Experiments showed that, under normal network conditions, the system can support 350 branch conference nodes broadcasting live simultaneously, with smooth audio and video quality. It can therefore support large-scale remote video conferences.
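The FEC component mentioned above can be illustrated with a minimal single-parity XOR scheme: one parity packet per group lets the receiver reconstruct any one lost packet without retransmission. This is a simplification of what a production conferencing system would use, and the packet contents are hypothetical.

```python
from functools import reduce

def make_parity(packets):
    """XOR parity over equal-length packets; protects against one loss."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def recover(received, parity):
    """received: packet list with exactly one None marking the lost packet."""
    present = [p for p in received if p is not None]
    # XOR of all surviving packets plus the parity reproduces the lost one
    lost = make_parity(present + [parity])
    return [p if p is not None else lost for p in received]

# Hypothetical media packets (in practice, equal-size RTP payloads)
group = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(group)
damaged = [b"AAAA", None, b"CCCC"]   # packet 1 lost in transit
restored = recover(damaged, parity)
print(restored[1])  # b'BBBB'
```

Avoiding retransmission matters here because a live tree-P2P broadcast cannot afford the round-trip delay of resending lost packets down every branch.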
Norris, Jade Eloise; Castronovo, Julie
Much research has investigated the relationship between the Approximate Number System (ANS) and mathematical achievement, with continued debate surrounding the existence of such a link. The use of different stimulus displays may account for discrepancies in the findings. Indeed, closer scrutiny of the literature suggests that studies supporting a link between ANS acuity and mathematical achievement in adults have mostly measured the ANS using spatially intermixed displays (e.g. of blue and yellow dots), whereas those failing to replicate a link have primarily used spatially separated dot displays. The current study directly compared ANS acuity when using intermixed or separate dots, investigating how such methodological variation mediated the relationship between ANS acuity and mathematical achievement. ANS acuity was poorer and less reliable when measured with intermixed displays, with performance during both conditions related to inhibitory control. Crucially, mathematical achievement was significantly related to ANS accuracy difference (accuracy on congruent trials minus accuracy on incongruent trials) when measured with intermixed displays, but not with separate displays. The findings indicate that methodological variation affects ANS acuity outcomes, as well as the apparent relationship between the ANS and mathematical achievement. Moreover, the current study highlights the problem of low reliabilities of ANS measures. Further research is required to construct ANS measures with improved reliability, and to understand which processes may be responsible for the increased likelihood of finding a correlation between the ANS and mathematical achievement when using intermixed displays.
Baird, Darren; Bernatovich, Mike; Gillespie, Ellen; Kadwa, Binaifer; Matthews, Dave; Penny, Wes; Zak, Tim; Grant, Mike; Bihari, Brian
The Orion spacecraft is designed to return astronauts to a landing within 10 km of the intended landing target from low Earth orbit, lunar direct-entry, and lunar skip-entry trajectories. While the landing is nominally controlled autonomously, the crew can fly precision entries manually in the event of an anomaly. The onboard entry displays will be used by the crew to monitor and manually fly the entry, descent, and landing, while the Entry Monitor System (EMS) will be used to monitor the health and status of the onboard guidance and the trajectory. The entry displays are driven by the entry display feeder, part of the EMS. The entry re-targeting module, also part of the EMS, provides all the data required to generate the capability footprint of the vehicle at any point in the trajectory, which is shown on the Primary Flight Display (PFD). It also provides caution and warning data and recommends the safest possible re-designated landing site when the nominal landing site is no longer within the capability of the vehicle. The PFD and the EMS allow the crew to manually fly an entry trajectory profile from entry interface until parachute deploy, with the flexibility to manually steer the vehicle to a selected landing site that best satisfies the priorities of the crew. The entry display feeder provides data from the EMS and other components of the GNC flight software to the displays at the proper rate and in the proper units. It also performs calculations that are specific to the entry displays and are not made in any other component of the flight software. In some instances, it performs calculations identical to those performed by the onboard primary guidance algorithm to protect against a guidance system failure. These functions and the interactions between the entry display feeder and the other components of the EMS are described.
Frank, R; Bethel, W
This case study highlights the technical challenges of creating an application that uses a multithreaded scene graph toolkit for rendering and uses a software environment for management of tiled display systems. Scene graph toolkits simplify and streamline graphics applications by providing data management and rendering services. Software for tiled display environments typically performs device and event management by opening windows on displays, by gathering and processing input device events, and by orchestrating the execution of application rendering code. These environments serve double-duty as frameworks for creating parallel rendering applications. We explore technical issues related to interfacing scene graph systems with software that manages tiled projection systems in the context of an implementation, and formulate suggestions for the future growth of such systems.
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Nakanishi, Yasukazu; Kijima, Toshiki; Ishioka, Junichiro; Matsuoka, Yoh; Numao, Noboru; Saito, Kazutaka; Fujii, Yasuhisa
The personal head-mounted display (HMD) has emerged as a novel image monitoring system. We present here the application of a high-definition organic electroluminescent binocular HMD in ureteral stent placement. Our HMD system displayed multiple forms of information such as integrated, sharp, high-contrast images using a four-split screen or a picture-in-picture technique both seamlessly and synchronously. When both the operator and the assistant wore an HMD, they could continuously and simultaneously monitor the cystoscopic and fluoroscopic images in an ergonomically natural position. Furthermore, each participant was able to modulate the displayed images depending on the procedure. In all five cases, both the operator and the assistant successfully used this system with no unfavorable event. No participants experienced any HMD wear-related adverse effects. We therefore believe this HMD system might be potentially beneficial during ureteral stent placement procedures. Furthermore, it is compact, easily introduced and affordable. © 2014 S. Karger AG, Basel.
Video applications on mobile wireless devices are challenging due to limited battery capacity: the complex functionality of video decoding imposes high resource requirements, so power-efficient control has become a critical design concern for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design without explicitly considering the scalable impact of subfunctions in the decoding process, and seldom considers the relationship with the features of the compressed video data. This paper develops an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD couples decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.
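The utility-maximizing allocation step can be sketched as a greedy marginal-utility search over per-partition decoding levels, raising whichever partition yields the most utility per joule until the budget is spent. This is a generic sketch of such an allocator, not the paper's algorithm; the (energy, utility) profiles below are invented.

```python
def allocate_energy(partitions, budget):
    """Greedy allocation: each partition is a list of (energy_cost, utility)
    options ordered by increasing quality level. Returns chosen level indices."""
    choice = [0] * len(partitions)              # start at the cheapest level
    spent = sum(p[0][0] for p in partitions)
    while True:
        best, best_ratio = None, 0.0
        for i, opts in enumerate(partitions):
            if choice[i] + 1 < len(opts):       # an upgrade is available
                de = opts[choice[i] + 1][0] - opts[choice[i]][0]
                du = opts[choice[i] + 1][1] - opts[choice[i]][1]
                # Pick the upgrade with the best utility-per-energy ratio
                if spent + de <= budget and du / de > best_ratio:
                    best, best_ratio = i, du / de
        if best is None:
            return choice
        choice[best] += 1
        spent += partitions[best][choice[best]][0] - partitions[best][choice[best] - 1][0]

# Hypothetical (energy, utility) profiles per decoding level
partitions = [
    [(1, 1.0), (3, 2.0), (6, 2.5)],   # high-motion partition
    [(1, 1.0), (2, 1.2)],             # low-motion partition
]
print(allocate_energy(partitions, budget=5))  # [1, 1]
```

With a budget of 5 the allocator upgrades the high-motion partition first (better utility per unit energy), then spends the remainder on the low-motion one.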
Masaoka, Kenichiro; Nishida, Yukihiro; Sugawara, Masayuki
A wide-gamut system colorimetry has been standardized for ultra-high-definition television (UHDTV). The chromaticities of the primaries are designed to lie on the spectral locus to cover major standard system colorimetries and real object colors. Although monochromatic light sources would be required for a display to perfectly fulfill the system colorimetry, highly saturated emission colors using recent quantum dot technology may effectively achieve the wide gamut. This paper presents simulation results on the chromaticities of highly saturated non-monochromatic light sources and the gamut coverage of real object colors, to be considered in designing wide-gamut displays with color filters for UHDTV.
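The gamut comparison at stake here can be made concrete with the standard BT.2020 (UHDTV) and BT.709 (HDTV) primary chromaticities and a shoelace-area calculation in the CIE 1931 xy plane. Triangle area in xy is a common, if simplified, gamut-size metric, not the coverage measure used in the paper.

```python
def triangle_area(p1, p2, p3):
    """Shoelace area of a chromaticity triangle in the CIE 1931 xy plane."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# Standard primary chromaticities (R, G, B)
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # ITU-R BT.2020
rec709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]  # ITU-R BT.709

a2020 = triangle_area(*rec2020)
a709 = triangle_area(*rec709)
print(f"BT.2020 covers {a2020 / a709:.2f}x the xy area of BT.709")  # 1.89x
```

The BT.2020 primaries sit on the spectral locus, which is why only (near-)monochromatic emitters such as lasers or narrow-band quantum dots can reach them.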
Scollato, A; Perrini, P; Benedetto, N; Di Lorenzo, N
We propose an easy-to-construct digital video editing system ideal for producing video documentation and still images. A digital video editing system applicable to many video sources in the operating room is described in detail. The proposed system has proved easy to use and permits one to obtain videography quickly and easily. Mixing different streams of video input from all the devices in use in the operating room, and applying filters and effects, produces a final, professional end-product. Recording to DVD provides an inexpensive, portable, and easy-to-use medium to store, re-edit, or copy to tape at a later time. From stored videography it is easy to extract high-quality still images useful for teaching, presentations, and publications. In conclusion, digital videography and still photography can easily be recorded by the proposed system, producing high-quality video recordings. The use of FireWire ports provides good compatibility with next-generation hardware and software. The high standard of quality makes the proposed system one of the lowest-priced products available today.
Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer
To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).
Javidi, Bahram; Son, Jung-Young
Three-Dimensional Imaging, Visualization, and Display describes recent developments, as well as the prospects and challenges facing 3D imaging, visualization, and display systems and devices. With the rapid advances in electronics, hardware, and software, 3D imaging techniques can now be implemented with commercially available components and can be used for many applications. This volume discusses the state-of-the-art in 3D display and visualization technologies, including binocular, multi-view, holographic, and image reproduction and capture techniques. It also covers 3D optical systems, 3D display instruments, 3D imaging applications, and details several attractive methods for producing 3D moving pictures. This book integrates the background material with new advances and applications in the field, and the available online supplement will include full color videos of 3D display systems. Three-Dimensional Imaging, Visualization, and Display is suitable for electrical engineers, computer scientists, optical e...
Heckendorn, F.M.; Robinson, C.W.
Specialized miniature low cost video equipment has been effectively used in a number of remote, radioactive, and contaminated environments at the Savannah River Site (SRS). The equipment and related techniques have reduced the potential for personnel exposure to both radiation and physical hazards. The valuable process information thus provided would not have otherwise been available for use in improving the quality of operation at SRS.
Bosma, T; Kanninga, R; Neef, J; Audouy, SAL; van Roosmalen, ML; Steen, A; Buist, G; Kok, J; Kuipers, OP; Robillard, G; Leenhouts, K
A novel display system is described that allows highly efficient immobilization of heterologous proteins on bacterial surfaces in applications for which the use of genetically modified bacteria is less desirable. This system is based on nonliving and non-genetically modified gram-positive bacterial
Brons, L.; Greef, T. de; Kleij, R. van der
Motivation - Both multi-team systems and awareness displays have been studied more often in recent years, but little attention has been paid to the combination of these two subjects. Apart from addressing that combination, we are particularly interested in the difficulties encountered when multi-team systems are
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
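The calibration pipeline described above can be sketched as a two-step correction: linearize the raw camera response, then apply a zero-point derived from reference stars with synthetic magnitudes in the camera's own bandpass. The power-law form and the gamma value below are placeholders, not the MEO's measured curve:

```python
import math

def linearity_correct(raw_flux, gamma=0.9):
    # Hypothetical power-law linearity correction; the empirically
    # measured curve for the Watec 902H2 would replace this assumption.
    return raw_flux ** (1.0 / gamma)

def instrumental_magnitude(flux):
    # Standard astronomical magnitude from integrated flux (counts).
    return -2.5 * math.log10(flux)

def calibrated_magnitude(raw_flux, zero_point):
    # The zero-point comes from reference stars whose synthetic
    # magnitudes are computed in the Sony EX-View HAD bandpass.
    return zero_point + instrumental_magnitude(linearity_correct(raw_flux))
```

Because the zero-point and the meteor flux are both measured in the same bandpass, no assumption about the meteor's spectral energy distribution is needed.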
Ferreira, João, E-mail: email@example.com [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)
Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed in the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality content on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries of the localization system. The proposed localization system was tested in a mock-up scenario at a 1:25 scale of the divertor level of the Tokamak building.
Wang, S S; Starren, J
Computer-based patient records have traditionally been composed of textual data, and integration of multimedia data has historically been slow: multimedia data such as images, audio, and video are more difficult to handle. An implementation of a clinical system for multimedia data is discussed. The implementation uses Java, the Secure Socket Layer (SSL), and Oracle 8i. Because the system runs on top of the Internet, it is architecture-independent, cross-platform, cross-vendor, and secure. Design and implementation issues are discussed.
This paper presents a pool supporting system with a camera-mounted handheld display based on augmented reality technology. Using our system, users obtain supporting information once they capture an image of a pool table, and can watch visual aids through the display while capturing the table. First, the system estimates ball positions on the table from one image taken from an arbitrary viewpoint. Next, it suggests several candidate shots, taking the follow-on shot into account. Finally, it presents visual aids such as the shooting direction and ball behavior. The main purpose of the system is to estimate and analyze the distribution of balls and to present visual aids. The system is implemented without special equipment such as magnetic sensors or artificial markers. To evaluate the system, the accuracy of the estimated ball positions and the effectiveness of the supporting information are presented.
Fog, Benedikte; Ulfkjær, Jacob Kanneworff Stigsen; Schlichter, Bjarne Rerup
The study of business information systems has become increasingly important in the Digital Economy. However, students have been found to have difficulties understanding its practical implications, which leads to decreased motivation. This study aims to investigate how to optimize the use of video to increase comprehension of the practical implications of studying business information systems. This qualitative study is based on observations and focus group interviews with first-semester business students. The findings suggest that the video examined in the case study did not sufficiently reflect the theoretical recommendations for using video optimally in management education; it did not comply with the video learning sequence introduced by Marx and Frost (1998). The study also questions whether the level of cognitive orientation activities can become too extensive.
Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos
In emergency situations, the ability to remotely monitor unfolding events using high-quality video feeds will significantly improve the incident commander's understanding of the situation and thereby aid effective decision making. This paper presents a novel, adaptive video monitoring system for emergency situations where the normal communications network infrastructure has been severely impaired or is no longer operational. The proposed scheme, operating over a rapidly deployable wireless mesh network, supports real-time video feeds between first responders, forward operating bases and primary command and control centers. Video feeds captured on portable devices carried by first responders and by static visual sensors are encoded in H.264/SVC, the scalable extension to H.264/AVC, allowing efficient, standard-based temporal, spatial, and quality scalability of the video. A three-tier video delivery system is proposed, which balances the need to avoid overuse of mesh nodes with the operational requirements of the emergency management team. In the first tier, the video feeds are delivered at a low spatial and temporal resolution employing only the base layer of the H.264/SVC video stream. Routing in this mode is designed to employ all nodes across the entire mesh network. In the second tier, whenever operational considerations require that commanders or operators focus on a particular video feed, a `fidelity control' mechanism at the monitoring station sends control messages to the routing and scheduling agents in the mesh network, which increase the quality of the received picture using SNR scalability while conserving bandwidth by maintaining a low frame rate. In this mode, routing decisions are based on reliable packet delivery with the most reliable routes being used to deliver the base and lower enhancement layers; as fidelity is increased and more scalable layers are transmitted they will be assigned to routes in descending order of reliability. The third tier
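The tiered layer/route assignment can be sketched as follows. The layer names and the third-tier layer set are illustrative assumptions (the abstract is cut off before the third tier is fully described); the ordering rule, base and lower layers on the most reliable routes, is the one stated above:

```python
def select_layers(tier, route_reliabilities):
    """Assign SVC layers to mesh routes for a given delivery tier.
    Tier 1: base layer only. Tier 2: add SNR (quality) layers at a low
    frame rate. Tier 3 (assumed): spatial/temporal layers as well."""
    layers = {1: ["base"],
              2: ["base", "quality1", "quality2"],
              3: ["base", "quality1", "quality2", "spatial1", "temporal1"]}[tier]
    # Routes sorted by reliability: the base and lower enhancement layers
    # go on the most reliable routes, higher layers in descending order.
    routes = sorted(route_reliabilities, key=route_reliabilities.get, reverse=True)
    return {layer: routes[min(i, len(routes) - 1)] for i, layer in enumerate(layers)}
```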
Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.
Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
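The vergence requirement mentioned above follows from simple geometry: per-eye image offsets are chosen so that a target at a given virtual distance demands the corresponding convergence angle. A minimal sketch, assuming a typical interpupillary distance of 63 mm (not a figure from the paper):

```python
import math

def vergence_angle_deg(target_distance_m, ipd_m=0.063):
    # Ocular vergence angle required to fixate a target at the given
    # virtual distance; nearer targets demand larger convergence.
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / target_distance_m))
```

A target at 1 m requires roughly 3.6 degrees of convergence, while a distant target approaches parallel lines of sight.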
Cutolo, Fabrizio; Meola, Antonio; Carbone, Marina; Sinceri, Sara; Cagnazzo, Federico; Denaro, Ennio; Esposito, Nicola; Ferrari, Mauro; Ferrari, Vincenzo
Benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. A two-phase evaluation process was adopted in a simulated small tumor resection adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training in performing spatial judgment tasks. In Phase II, three surgeons were involved in assessing the effectiveness of the AR-neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potentialities of the AR-neuronavigator in aiding the determination of the optimal surgical access to the surgical target. The AR-neuronavigator is intuitive, easy-to-use, and provides three-dimensional augmented information in a perceptually-correct way. The system proved to be effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumour resection procedures.
Madiedo, J. M.; Trigo-Rodriguez, J. M.; Lyytinen, E.
The SPanish Meteor Network (SPMN) is performing a continuous monitoring of meteor activity over Spain and neighbouring countries. The huge amount of data obtained by the 25 video observing stations that this network is currently operating made it necessary to develop new software packages to accomplish some tasks, such as data reduction and remote operation of autonomous systems based on high-sensitivity CCD video devices. The main characteristics of this software are described here.
Ellis, Stephen R.; Adelstein, Bernard D.
System response latency is a prominent characteristic of human-computer interaction. Laggy systems are, however, not simply annoying; they substantially reduce user productivity. The impact of latency on head-referenced display systems, particularly head-mounted systems, is especially troubling since it can not only interfere with dynamic registration in augmented reality displays but can also, in some cases, indirectly contribute to motion sickness. We summarize several experiments using standard psychophysical discrimination techniques that suggest what system latencies will be required to achieve perceptual stability for spatially referenced computer-generated imagery. In conclusion, we speculate about other system performance characteristics that a dream augmented reality system would have.
Known as a Graphic Server, the system presented was designed for the control ground segment of the Telecom 2 satellites. It is a tool used to dynamically display telemetry data within graphic pages, also known as views. The views are created off-line through various utilities and then, on the operator's request, displayed and animated in real time as data is received. The system was designed as an independent component, and is installed in different Telecom 2 operational control centers. It enables operators to monitor changes in the platform and satellite payloads in real time. It has been in operation since December 1991.
Betancur, J. Alejandro; Osorio-Gomez, Gilberto; Agudelo, J. David
Currently, in the automotive industry the interaction between drivers and Augmented Reality (AR) systems is a subject of analysis, especially the identification of the advantages and risks that this kind of interaction represents. Consequently, this paper highlights the potential applications of Head-Up Display (HUD) and Head-Down Display (HDD) systems in automotive vehicles, showing applications and trends under study. In general, automotive advances related to AR devices suggest the partial integration of HUDs and HDDs in automobiles; however, the right way to do so is still a moot point.
Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu
For a new kind of retina-like sensor camera and a traditional rectangular-sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular-sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
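The coordinate transformation with sub-pixel interpolation can be illustrated as below. The log-polar (rings × sectors) layout is an assumption about the retina-like sensor, made only for the sketch; the bilinear sampler is the standard sub-pixel interpolation the abstract refers to:

```python
import math

def bilinear(img, x, y):
    # Sub-pixel sample of img (a list of rows) at fractional (x, y).
    x0 = min(int(x), len(img[0]) - 2)
    y0 = min(int(y), len(img) - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0] + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0] + dx * dy * img[y0 + 1][x0 + 1])

def logpolar_to_cartesian(raw, size, cx, cy, r_max):
    # raw: rings x sectors samples from the retina-like sensor, assumed
    # to lie on a log-polar grid; remap to a rectangular image for display.
    rings, sectors = len(raw), len(raw[0])
    out = [[0.0] * size for _ in range(size)]
    for v in range(size):
        for u in range(size):
            r = math.hypot(u - cx, v - cy)
            if 1.0 <= r <= r_max:
                ring = (rings - 1) * math.log(r) / math.log(r_max)
                sector = (math.atan2(v - cy, u - cx) % (2 * math.pi)) \
                         / (2 * math.pi) * (sectors - 1)
                out[v][u] = bilinear(raw, sector, ring)
    return out
```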
Liu, Huajie; Wang, Jianbang; Song, S
Oligonucleotide-based technologies for biosensing or bio-regulation produce huge amounts of rich high-dimensional information. There is a consequent need for flexible means to combine diverse pieces of such information to form useful derivative outputs, and to display those immediately. Here we demonstrate this capability in a DNA-based system that takes two input numbers, represented in DNA strands, and returns the result of their multiplication, writing this as a number in a display. Unlike a conventional calculator, this system operates by selecting the result from a library of solutions rather...
Rorie, Conrad; Fern, Lisa; Roberts, Zach; Monk, Kevin; Santiago, Confesor; Shively, Jay
The full integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS), a prerequisite for enabling a broad range of public and commercial UAS operations, presents several technical challenges to UAS developers, operators and regulators. A primary barrier is the inability of UAS pilots (situated at a ground control station, or GCS) to comply with Title 14 of the Code of Federal Regulations, sections 91.111 and 91.113, which require pilots to see and avoid other aircraft in order to maintain well clear. The present study is the final in a series of human-in-the-loop experiments designed to explore and test the various display and alerting requirements being incorporated into the minimum operational performance standards (MOPS) for a UAS-specific detect and avoid (DAA) system that would replace the see-and-avoid function required of manned aircraft. Two display configurations were tested - an integrated display and a standalone display - and their impact on pilot response times and ability to maintain DAA well clear were compared. Results indicated that the current draft of the MOPS results in a high level of performance that did not meaningfully differ by display configuration.
Chen Homer H
The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate computational resources to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to the digital home, surveillance, IPTV, and online games.
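Allocating a computational budget across channels under a complexity-distortion model can be sketched greedily: each slice of cycles goes to the channel whose distortion drops most. The convex model d = k / (1 + c) below is a stand-in assumption, not the paper's model:

```python
def allocate_cycles(channels, total_cycles, step):
    """Greedy sketch of dividing a server's computational budget among
    live channels. `channels` maps a channel name to a model constant k
    in the assumed complexity-distortion model d(c) = k / (1 + c)."""
    alloc = {ch: 0.0 for ch in channels}
    dist = lambda ch, c: channels[ch] / (1.0 + c)
    for _ in range(int(total_cycles / step)):
        # Give the next slice of cycles to the channel with the largest
        # marginal distortion reduction.
        best = max(channels,
                   key=lambda ch: dist(ch, alloc[ch]) - dist(ch, alloc[ch] + step))
        alloc[best] += step
    return alloc
```

Channels with steeper complexity-distortion curves (larger k) receive proportionally more cycles, which is the intuition behind the global optimization the paper describes.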
Parihar, Vijay; Yadav, Y R; Kher, Yatin; Ratre, Shailendra; Sethi, Ashish; Sharma, Dhananjaya
A steep learning curve is encountered initially in pure endoscopic procedures. The video telescopic operating monitor (VITOM), an advance in rigid-lens telescope systems, provides an alternative method for learning the basics of neuroendoscopy with the help of the familiar principles of microneurosurgery. The aim was to evaluate the clinical utility of VITOM as a learning tool for neuroendoscopy. VITOM was used in 39 cranial and spinal procedures (25 cranial and 14 spinal), and its utility as a tool for minimally invasive neurosurgery and for the initial neuroendoscopy learning curve was studied. Image quality was comparable to that of the endoscope and microscope, and surgeon comfort improved with VITOM. Frequent repositioning of the scope holder and the lack of stereopsis were initial limiting factors, which were compensated for with repeated procedures. VITOM was found useful for easing the initial learning curve of neuroendoscopy.
渡部, 和雄; 湯瀬, 裕昭; 渡邉, 貴之; 井口, 真彦; 藤田, 広一
The authors have developed a distance education system for interactive education which can transmit 4 video streams between distant lecture rooms. In this paper, we describe the results of our experiments using the system for adult education. We propose some efficient ways to use the system for adult education.
Matsukura, Haruka; Yoneda, Tatsuhiro; Ishida, Hiroshi
We propose a new olfactory display system that can generate an odor distribution on a two-dimensional display screen. The proposed system has four fans on the four corners of the screen. The airflows that are generated by these fans collide multiple times to create an airflow that is directed towards the user from a certain position on the screen. By introducing odor vapor into the airflows, the odor distribution is as if an odor source had been placed onto the screen. The generated odor distribution leads the user to perceive the odor as emanating from a specific region of the screen. The position of this virtual odor source can be shifted to an arbitrary position on the screen by adjusting the balance of the airflows from the four fans. Most users do not immediately notice the odor presentation mechanism of the proposed olfactory display system because the airflow and perceived odor come from the display screen rather than the fans. The airflow velocity can even be set below the threshold for airflow sensation, such that the odor alone is perceived by the user. We present experimental results that show the airflow field and odor distribution that are generated by the proposed system. We also report sensory test results to show how the generated odor distribution is perceived by the user and the issues that must be considered in odor presentation.
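One simple way to parameterize the balance of the four corner fans (an illustrative assumption, not the paper's control law) is to drive each fan harder the farther the virtual odor source lies from its own corner, so the colliding airflows converge at the target position:

```python
import math

def fan_balance(x, y):
    """Relative drive strengths for the four corner fans, for a virtual
    odor source at (x, y), both normalized to [0, 1] on the screen.
    A weak fan lets the collision point sit near its own corner."""
    return {
        "bottom_left":  math.hypot(x, y),
        "bottom_right": math.hypot(1 - x, y),
        "top_left":     math.hypot(x, 1 - y),
        "top_right":    math.hypot(1 - x, 1 - y),
    }
```

At the screen center all four fans run equally; moving the virtual source toward a corner weakens that corner's fan and strengthens the opposite one, shifting the perceived odor position accordingly.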
Cheng, Tsung-Sheng; Lu, Yu-Chun; Yang, Chu-Sing
Multimedia plays a vital role in both learning systems and the actual education process. However, currently used presentation software is often not optimized and generates a great deal of clutter on the screen. Furthermore, there is often insufficient space on a single display, leading to the division of content. These limitations generally…
From the Federal Register Online via the Government Publishing Office: INTERNATIONAL TRADE COMMISSION, In the Matter of Certain Multimedia Display and Navigation Devices and Systems, Components Thereof... ... 6,122,592 (``the '592 patent''). The complaint named Garmin International, Inc. of Olathe, Kansas...
Navigating with Electronic Chart Display and Information Systems (ECDIS) is fundamentally different from navigating with paper charts. The paper addresses the model course on training in the operational use of ECDIS and presents problems related to the risk of over-reliance on ECDIS.
In the past decade, the display format has evolved from HD (High Definition) through Full HD (1920 × 1080) to UHD (4K × 2K), guiding the display industry in two main directions: liquid crystal displays (LCDs) from 10 inches to 100 inches and more, and projectors. Although LCDs are popular in the market, producing such displays demands heavy investment, with little consideration of environmental pollution and protection. A projection system may be preferable owing to wider viewing access, flexibility in location, energy saving and environmental protection. This work designs and fabricates a short-throw liquid-crystal-on-silicon (LCoS) projection system for home cinema. It provides a projection lens system, including a telecentric lens fitted to the emissive LCoS panel to collimate light and enlarge the field angle. The optical path is then guided by a symmetric lens; light from the LCoS passes through the lens and reflects off an aspherical mirror to form a low-distortion image on a blank wall or screen for home cinema. The throw ratio is less than 0.33.
Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki
In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is highly demanded; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras should be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
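The automatic background estimation step, watching how each pixel changes over time, can be realized minimally with a per-pixel temporal median; this is one simple reading of the chrominance-change idea, not necessarily the authors' exact method:

```python
def estimate_background(frames):
    """Per-pixel temporal median over a sequence of frames (each a list
    of rows of pixel values). The static pitch dominates each pixel's
    history, so briefly passing players are rejected as outliers."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            history = sorted(f[y][x] for f in frames)
            bg[y][x] = history[len(history) // 2]
    return bg
```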
An experimental investigation of the depth cues of head-movement perspective and of image intensity as a function of depth is summarized. The experiment was based on a hybrid computer-generated contact-analog visual display in which various perceptual depth cues are included on a two-dimensional CRT screen. The system's purpose was to impart information, in an integrated and visually compelling fashion, about the vehicle's position and orientation in space. Results show head movement gives a 40% improvement in depth discrimination when the display is between 40 and 100 cm from the subject; intensity variation resulted in as much improvement as head movement.
Dufaux, Frederic; Cagnazzo, Marco
With the expectation of greatly enhanced user experience, 3D video is widely perceived as the next major advancement in video technology. In order to fulfil the expectation of enhanced user experience, 3D video calls for new technologies addressing efficient content creation, representation/coding, transmission and display. Emerging Technologies for 3D Video will deal with all aspects involved in 3D video systems and services, including content acquisition and creation, data representation and coding, transmission, view synthesis, rendering, display technologies, human perception...
Wang, Zhou; Zeng, Kai; Rehman, Abdul; Yeganeh, Hojatollah; Wang, Shiqi
How to deliver videos to consumers over the network for optimal quality-of-experience (QoE) has been the central goal of modern video delivery services. Surprisingly, regardless of the large volume of videos being delivered every day through various systems attempting to improve visual QoE, the actual QoE of end consumers is not properly assessed, let alone used as the key factor in making critical decisions at the video hosting, network and receiving sites. Real-world video streaming systems typically use bitrate as the main video presentation quality indicator, but using the same bitrate to encode different video content can result in drastically different visual QoE, which is further affected by the display device and viewing condition of each individual consumer who receives the video. To correct this, we have to put QoE back in the driver's seat and redesign the video delivery systems. To achieve this goal, a major challenge is to find an objective video presentation QoE predictor that is accurate, fast, easy to use, display-device adaptive, and provides meaningful QoE predictions across resolutions and content. We propose to use the newly developed SSIMplus index (https://ece.uwaterloo.ca/~z70wang/research/ssimplus/) for this role. We demonstrate that based on SSIMplus, one can develop a smart adaptive video streaming strategy that leads to much smoother visual QoE than is possible with existing adaptive bitrate video streaming approaches. Furthermore, SSIMplus finds many more applications, in live and file-based quality monitoring, in benchmarking video encoders and transcoders, and in guiding network resource allocations.
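The shift from bitrate-driven to QoE-driven adaptation amounts to choosing renditions by predicted quality rather than by bitrate alone. A minimal sketch, where the quality scores are illustrative stand-ins for a device-adapted predictor such as SSIMplus:

```python
def choose_rendition(renditions, bandwidth_kbps):
    """Pick the bitrate whose predicted perceptual quality is highest
    among those that fit the available bandwidth. `renditions` maps
    bitrate (kbps) to a quality score in [0, 100]; the scores would come
    from a perceptual predictor, not from the bitrate itself."""
    feasible = {br: q for br, q in renditions.items() if br <= bandwidth_kbps}
    if not feasible:
        return min(renditions)   # fall back to the lowest bitrate
    return max(feasible, key=feasible.get)
```

Because the same bitrate yields very different quality on different content and displays, the quality map, and hence the chosen rendition, changes per title and per device even when the ladder of bitrates stays fixed.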
Yao, Yi; Liu, Xu; Lin, Yuanfang; Zhang, Huangzhu; Zhang, Xiaojie; Liu, Xiangdong
Since present display technology projects 3D onto 2D, the viewer's eyes are deceived by the loss of spatial data, so developing a real 3D display device is a revolution for human vision. The monitor is based on an emissive pad with a 64 × 256 LED array; when rotated at a frequency of 10 Hz, it shows real 3D images with pixels at their exact positions. The article presents a procedure by which the software processes a 3D object and converts it to volumetric 3D-formatted data for this system. For simulating the phenomenon on a PC, it also presents a program that remodels the object using OpenGL. An algorithm for faster processing and optimized rendering speed is also given. The monitor provides real 3D scenes with a free viewing angle. It can be expected that this revolution will shake up modern monitors and lead to a new world of display technology.
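Converting an object into volumetric data for such a swept-volume display means mapping Cartesian voxels to (rotation slice, LED column, LED row) addresses. The panel geometry below (radius along the 256 columns, height along 64 rows, z normalized to [0, 1]) is an assumption about the device, made only for the sketch:

```python
import math

def voxel_to_led(x, y, z, slices=64, cols=256, rows=64, r_max=1.0):
    """Map a Cartesian voxel to an address on the rotating LED panel.
    Returns (rotation slice, LED column, LED row), or None when the
    voxel lies outside the swept cylinder."""
    r = math.hypot(x, y)
    if r > r_max:
        return None
    angle = math.atan2(y, x) % (2 * math.pi)
    slice_i = int(angle / (2 * math.pi) * slices) % slices
    col = min(int(r / r_max * cols), cols - 1)   # radial position
    row = min(int(z * rows), rows - 1)           # height
    return slice_i, col, row
```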
Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa
In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones. Therefore, when each cell of the Braille code vibrates at optimal frequencies, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display that used actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM) as cells. The HDAM that encapsulated incompressible liquids in microchambers with two flexible polymer membranes could amplify the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies higher than 50 Hz than at vibration frequencies lower than 50 Hz, which verified that the proposed vibrational Braille code display is efficient by successfully exploiting the characteristics of human tactile receptors.
Potter, Ray; Roberts, Deborah
This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...
Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.
Video data require a very large memory capacity, so finding a video encoding method with an optimal quality-to-volume ratio is one of the most pressing problems, driven by the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on measurement error in television systems. Measurement error of an object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors introduced by the optical system are one source of error in television system measurements; the method used to process the received video signal is another. In the case of compression with a constant data stream rate, errors lead to large distortions; in the case of constant quality, they increase the amount of data required to transmit or record an image frame. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of a television image. This redundancy is caused by the strong correlation between elements of the image. If a suitable orthogonal transformation can be found, an array of image samples can be converted into a matrix of coefficients that are uncorrelated with each other. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. For typical images, a transformation can be chosen such that most of the matrix coefficients are almost zero. Excluding these zero coefficients also
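The orthogonal-transform argument in this abstract can be illustrated numerically. The 8x8 DCT-II used below is one standard choice of such a transform, and the ramp-shaped test block is an arbitrary example of strongly correlated samples, not data from the paper:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

n = 8
C = dct_matrix(n)
# Smooth horizontal ramp: neighboring samples are strongly correlated.
block = np.tile(np.arange(n, dtype=float), (n, 1))
coeffs = C @ block @ C.T          # separable 2D orthogonal transform
near_zero = np.count_nonzero(np.abs(coeffs) < 1e-9)
print(near_zero, "of", n * n, "coefficients are (near) zero")
```

The few surviving coefficients carry almost all of the signal energy, which is exactly what makes the subsequent entropy coding of the (mostly zero) matrix effective.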
Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan
To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
Smith, Jemma; Hand, Linda; Dowrick, Peter W.
This study examined the efficacy of video self modeling (VSM) using feedforward, to teach various goals of a picture exchange communication system (PECS). The participants were two boys with autism and one man with Down syndrome. All three participants were non-verbal with no current functional system of communication; the two children had long…
Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J
A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.
Walpitagama, Milanga; Kaslin, Jan; Nugegoda, Dayanthi; Wlodkowic, Donald
The fish embryo toxicity (FET) biotest performed on embryos of zebrafish (Danio rerio) has gained significant popularity as a rapid and inexpensive alternative approach in chemical hazard and risk assessment. The FET was designed to evaluate acute toxicity in embryonic stages of fish exposed to the test chemical. The current standard, similar to most traditional methods for evaluating aquatic toxicity, provides, however, little understanding of the effects of environmentally relevant concentrations of chemical stressors. We postulate that significant environmental effects such as altered motor functions, physiological alterations reflected in heart rate, and effects on development and reproduction can occur at sub-lethal concentrations well below the LC10. Behavioral studies can, therefore, provide a valuable integrative link between physiological and ecological effects. Despite the advantages of behavioral analysis, the development of behavioral toxicity biotests is greatly hampered by the lack of dedicated laboratory automation, in particular user-friendly and automated video microscopy systems. In this work we present a proof-of-concept development of an optical system capable of tracking the behavioral responses of embryonic vertebrates using automated and vastly miniaturized time-resolved video microscopy. We employed miniaturized CMOS cameras to perform high-definition video recording and analysis of the earliest vertebrate behavioral responses. The main objective was to develop biocompatible embryo positioning structures suitable for high-throughput imaging, together with video capture and video analysis algorithms. This system should support the development of sub-lethal and behavioral markers for accelerated environmental monitoring.
Allin, T.; Neubert, T.; Laursen, S.; Rasmussen, I. L.; Soula, S.
In support of global ELF/VLF observations, HF measurements in France, and conjugate photometry/VLF observations in South Africa, we developed and operated a semi-automatic, remotely controlled video system for the observation of middle-atmospheric transient luminous events (TLEs). Installed at the Pic du Midi Observatory in Southern France, the system was operational during the period from July 18 to September 15, 2003. The video system, based on two low-light, non-intensified CCD video cameras, was mounted on top of a motorized pan/tilt unit. The cameras and the pan/tilt unit were controlled over serial links from a local computer, and the video outputs were distributed to a pair of PCI frame grabbers in the computer. This setup allowed remote users to log in and operate the system over the internet. Event detection software provided means of recording and time-stamping single TLE video fields and thus eliminated the need for continuous human monitoring of TLE activity. The computer recorded and analyzed two parallel video streams at the full 50 Hz field rate, while uploading status images, TLE images, and system logs to a remote web server. The system detected more than 130 TLEs - mostly sprites - distributed over 9 active evenings. We have thus demonstrated the feasibility of remote agents for TLE observations, which are likely to find use in future ground-based TLE observation campaigns, or to be installed at remote sites in support of space-borne or other global TLE observation efforts.
Chan, Fai; Moon, Yiu-Sang; Chen, Jiansheng; Ma, Yiu-Kwan; Tsang, Wai-Hung; Fu, Kah-Kuen
Low-resolution and unsharp facial images are often captured from surveillance videos because of long human-camera distances and human movement. Previous works addressed this problem by using an active camera to capture close-up facial images, without considering human movement and the mechanical delays of the active camera. In this paper, we propose a unified framework to capture facial images in video surveillance systems by using one static and one active camera in a cooperative manner. Human faces are first located by a skin-color based real-time face detection algorithm. A stereo camera model is then employed to approximate the human face location and velocity with respect to the active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amount of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with an average error of 3%. In addition, it is capable of capturing facial images of a walking human clearly on the first attempt in 90% of the test cases.
Agour, Mostafa; Falldorf, Claas; Bergmann, Ralf B
We present a new method for the generation of a dynamic wave field with high space bandwidth product (SBP). The dynamic wave field is generated from several wave fields diffracted by a display which comprises multiple spatial light modulators (SLMs) each having a comparably low SBP. In contrast to similar approaches in stereoscopy, we describe how the independently generated wave fields can be coherently superposed. A major benefit of the scheme is that the display system may be extended to provide an even larger display. A compact experimental configuration which is composed of four phase-only SLMs to realize the coherent combination of independent wave fields is presented. Effects of important technical parameters of the display system on the wave field generated across the observation plane are investigated. These effects include, e.g., the tilt of the individual SLM and the gap between the active areas of multiple SLMs. As an example of application, holographic reconstruction of a 3D object with parallax effects is demonstrated.
O'Connell, Niamh; Madsen, Henrik; Andersen, Philip Hvidthøft Delff
In this paper we propose a method for identifying and validating a model of the heat dynamics of a supermarket refrigeration display case for the purpose of advanced control. The model is established to facilitate the development of novel model-based control techniques for individual display units in a supermarket refrigeration system. The grey-box modelling approach is adopted, using stochastic differential equations to define the dynamics of the model, combining prior knowledge of the physical system with data-driven modelling. Model identification is performed using the forward selection method, and the performance of candidate models is evaluated through cross-validation. The model developed in this work uses operational data from a small Danish supermarket. A three-state model is determined to be most appropriate for describing the dynamics of this system. Advanced local control employing the identified...
Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.
A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It can provide a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demands. The dense viewing points provide smooth motion parallax and larger image depth without blur. The basic principle of the stereoscopic display is described first. Then, design architectures including hardware and software are demonstrated. The system consists of a frontal projection lenticular screen, an optimally designed projector array, and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on viewing requirements such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one is used as the main controller, while the other five client PCs process 30 channel signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.
Lee, June; Yoon, Seo Young; Lee, Chung Hyun
The purposes of the study are to investigate CHLS (Cyber Home Learning System) in online video conferencing environment in primary school level and to explore the students' responses on CHLS-VC (Cyber Home Learning System through Video Conferencing) in order to explore the possibility of using CHLS-VC as a supportive online learning system. The…
A vehicle detection and counting system based on video image processing is studied, comprising video vehicle detection, image processing of vehicle targets, and vehicle counting functions. Vehicle detection uses the inter-frame difference method and vehicle shadow segmentation techniques. The image processing functions apply gray-scale conversion of color images, image segmentation, mathematical morphology analysis, image filling, and related operations to the detection results, after which the target vehicle is extracted. The counting function counts the detected vehicles. The system uses the inter-frame video difference method to detect vehicles and completes the counting function by adding frames to vehicles and comparing boundaries, achieving a high recognition rate, fast operation, and ease of use. The purpose of this paper is to enhance the modernization and automation of traffic management. This study can provide a reference for the future development of related applications.
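The inter-frame difference step described in this abstract can be sketched in a few lines; the frame contents below are a toy stand-in for real traffic video, and the threshold is an illustrative value:

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Binary motion mask via inter-frame differencing (grayscale uint8 frames)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy frames: a bright 3x3 "vehicle" moves two pixels to the right.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = np.zeros((8, 8), dtype=np.uint8)
prev[2:5, 1:4] = 200
curr[2:5, 3:6] = 200

mask = frame_difference_mask(prev, curr)
print(int(mask.sum()))  # -> 12 changed pixels (the overlap column cancels out)
```

In a full pipeline, this mask would then be cleaned up with the morphological operations the abstract mentions before connected regions are counted as vehicles.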
This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.
A. L. Oleinik
Subject of Research. The paper deals with the problem of multiple face tracking in a video stream. The primary application of the implemented tracking system is automatic video surveillance. The particular operating conditions of surveillance cameras are taken into account in order to increase the efficiency of the system in comparison to existing general-purpose analogs. Method. The developed system is comprised of two subsystems: detector and tracker. The tracking subsystem does not depend on the detector, and thus various face detection methods can be used. Furthermore, only a small portion of frames is processed by the detector in this structure, substantially improving the operation rate. The tracking algorithm is based on BRIEF binary descriptors that are computed very efficiently on modern processor architectures. Main Results. The system is implemented in C++ and experiments on processing rate and quality evaluation were carried out. MOTA and MOTP metrics are used for tracking quality measurement. The experiments demonstrated a four-fold processing rate gain in comparison to the baseline implementation that processes every video frame with the detector. The tracking quality is at an adequate level compared to the baseline. Practical Relevance. The developed system can be used with various face detectors (including slow ones) to create a fully functional high-speed multiple face tracking solution. The algorithm is easy to implement and optimize, so it may be applied not only in full-scale video surveillance systems, but also in embedded solutions integrated directly into cameras.
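A minimal sketch of the BRIEF binary descriptor mentioned above, assuming a 16x16 patch and a random pair-sampling pattern (the pattern, sizes, and test data are illustrative, not taken from the paper):

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """BRIEF: one bit per pixel-pair intensity comparison inside a patch."""
    p = patch.astype(np.int16)
    return np.array([p[y1, x1] < p[y2, x2] for (y1, x1), (y2, x2) in pairs],
                    dtype=np.uint8)

def hamming(a, b):
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
# 128 random test pairs inside a 16x16 patch (an assumed sampling pattern).
pairs = [((int(a), int(b)), (int(c), int(d)))
         for a, b, c, d in rng.integers(0, 16, size=(128, 4))]

patch = rng.integers(0, 200, size=(16, 16))
d1 = brief_descriptor(patch, pairs)
d2 = brief_descriptor(patch + 50, pairs)   # uniform brightness change
print(hamming(d1, d2))  # -> 0: comparisons are invariant to a constant offset
```

The descriptor reduces to bit operations and integer comparisons, which is why it runs so efficiently on modern processors; matching candidates amounts to thresholding Hamming distances.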
Sehairi, Kamal; Chouireb, Fatima; Meunier, Jean
The objective of this study is to compare several change detection methods for a static camera and identify the best method for different complex environments and backgrounds in indoor and outdoor scenes. To this end, we used the CDnet video dataset as a benchmark, which consists of many challenging problems ranging from basic simple scenes to complex scenes affected by bad weather and dynamic backgrounds. Twelve change detection methods, ranging from simple temporal differencing to more sophisticated methods, were tested, and several performance metrics were used to evaluate the results precisely. Because most of the considered methods have not previously been evaluated on this recent large-scale dataset, this work fills a gap in the literature and complements previous comparative evaluations. Our experimental results show that there is no perfect method for all challenging cases; each method performs well in certain cases and fails in others. However, this study enables the user to identify the most suitable method for his or her needs.
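Pixel-level metrics of the kind used in such comparisons (precision, recall, and F-measure against ground-truth change masks) can be computed as follows; the two tiny masks are made-up examples:

```python
import numpy as np

def change_detection_metrics(pred, gt):
    """Precision, recall, and F-measure for binary change masks."""
    tp = np.count_nonzero(pred & gt)       # changed pixels correctly detected
    fp = np.count_nonzero(pred & ~gt)      # false alarms
    fn = np.count_nonzero(~pred & gt)      # missed changes
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gt = np.array([[1, 1, 0],
               [0, 1, 0]], dtype=bool)     # ground-truth changed pixels
pred = np.array([[1, 0, 0],
                 [0, 1, 1]], dtype=bool)   # detector output
p, r, f = change_detection_metrics(pred, gt)
print(round(p, 3), round(r, 3), round(f, 3))  # -> 0.667 0.667 0.667
```

Averaging such per-frame scores over the scenarios of a benchmark like CDnet is what allows methods of very different complexity to be ranked on a common scale.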
Geradts, Zeno J.; Merlijn, Menno; de Groot, Gert; Bijhold, Jurrien
The gait parameters of eleven subjects were evaluated to provide data for subject recognition purposes. Video images of these subjects were acquired in frontal, transversal, and sagittal (a plane parallel to the median plane of the body) views. The subjects walked by at their usual walking speed. The measured parameters were hip, knee, and ankle joint angles and their time-averaged values, thigh, foot, and trunk angles, step length and width, cycle time, and walking speed. Correlation coefficients within and between subjects for the hip, knee, and ankle rotation patterns in the sagittal aspect and for the trunk rotation pattern in the transversal aspect were similar. This implies that the intra- and inter-individual variances were equal; therefore, these gait parameters could not distinguish between subjects. A simple ANOVA with a follow-up test was used to detect significant differences for the mean hip, knee, and ankle joint angles, thigh angle, step length, step width, walking speed, cycle time, and foot angle. The number of significant differences between subjects defined the usefulness of each gait parameter. The parameter with the most significant differences between subjects was the foot angle (64%-73% of the maximal attainable significant differences), followed by the time-averaged hip joint angle (58%) and the step length (45%). The other parameters scored less than 25%, which is poor for recognition purposes. Based on this research, the use of gait for identification purposes is not yet possible.
Houze, Robert A., Jr.; Biggerstaff, M. I.; Rutledge, S. A.; Smull, B. F.
The utility of color displays of Doppler-radar data in revealing real-time kinematic information has been demonstrated in past studies, especially for extratropical cyclones and severe thunderstorms. Such displays can also indicate aspects of the circulation within a certain type of mesoscale convective system: the squall line with trailing "stratiform" rain. Displays from a single Doppler radar collected in two squall-line storms observed during the Oklahoma-Kansas PRE-STORM project conducted in May and June 1985 reveal mesoscale-flow patterns in the stratiform rain region of the squall line, such as front-to-rear storm-relative flow at upper levels, a subsiding storm-relative rear inflow at middle and low levels, and low-level divergent flow associated with strong mesoscale subsidence. "Dual-Doppler" analysis further illustrates these mesoscale-flow features and, in addition, shows the structure of the convective region within the squall line and a mesoscale vortex in the "stratiform" region trailing the line. A refined conceptual model of this type of mesoscale convective system is presented based on previous studies and observations reported here. Recognition of "single-Doppler-radar" patterns of the type described in this paper, together with awareness of the conceptual model, should aid in the identification and interpretation of this type of mesoscale system at future NEXRAD sites. The dual-Doppler results presented here further indicate the utility of multiple-Doppler observations of mesoscale convective systems in the STORM program.
Rodríguez-Pardo, Carlos Eduardo; Sharma, Gaurav; Feng, Xiao-Fan; Speigle, Jon; Sezan, Ibrahim
Primary selection plays a fundamental role in display design. Primaries affect not only the gamut of colors the system is able to reproduce; they also have an impact on power consumption and other cost-related variables. Using more than the traditional three primaries has been shown to be a versatile way of extending the color gamut, widening the viewing angle of LCD screens, and improving the power consumption of display systems. Adequate selection of primaries requires a trade-off between the multiple benefits the system offers and the costs and complexity it implies, among other design parameters. The purpose of this work is to present a methodology for the optimal design of three-primary and multiprimary display systems. We consider the gamut in perceptual spaces, which offer the advantage of an evaluation that correlates with human perception, and determine a design that maximizes the gamut volume constrained to a certain power budget. We also analyze the benefits of increasing the number of primaries and their effect on other performance variables such as gamut coverage.
Azer, Samy A; Algrain, Hala A; AlKhelaif, Rana A; AlEshaiwi, Sarah M
A number of studies have evaluated the educational content of videos on YouTube. However, little analysis has been done of videos about physical examination. This study aimed to analyze YouTube videos about physical examination of the cardiovascular and respiratory systems. It was hypothesized that the educational standards of videos on YouTube would vary significantly. During the period from November 2, 2011 to December 2, 2011, YouTube was searched by three assessors for videos covering the clinical examination of the cardiovascular and respiratory systems. For each video, the following information was collected: title, authors, duration, number of viewers, and total number of days on YouTube. Using criteria comprising content, technical authority, and pedagogy parameters, videos were rated independently by three assessors and grouped into educationally useful and non-useful videos. A total of 1920 videos were screened. Only relevant videos covering the examination of adults in the English language were identified (n=56). Of these, 20 were found to be relevant to cardiovascular examinations and 36 to respiratory examinations. Further analysis revealed that 9 provided useful information on cardiovascular examinations and 7 on respiratory examinations: scoring mean 14.9 (SD 0.33) and mean 15.0 (SD 0.00), respectively. The other videos, 11 covering cardiovascular and 29 covering respiratory examinations, were not educationally useful, scoring mean 11.1 (SD 1.08) and mean 11.2 (SD 1.29), respectively. The differences between these two categories were statistically significant. A small number of videos about physical examination of the cardiovascular and respiratory systems were identified as educationally useful; these videos can be used by medical students for independent learning and by clinical teachers as learning resources. The scoring system utilized by this study is simple, easy to apply, and could be used by other researchers on similar topics.
Yang, Jian; Xie, Xiaofang; Wang, Yan
Based on the AHRS (Attitude and Heading Reference System) and PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. The key technologies such as serial port communication and head attitude tracking are introduced, and the codes of the key part are given.
Prinzel, Lawrence J., III; Ellis, Kyle E.; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel
A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that the lack of external visual references was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. Therefore, CAST recommended development and implementation of virtual day-Visual Meteorological Condition (VMC) display systems, such as synthetic vision systems, which can promote flight crew attitude awareness similar to a day-VMC environment. This paper describes the results of a high-fidelity, large transport aircraft simulation experiment that evaluated virtual day-VMC displays and a "background attitude indicator" concept as an aid to pilots in recovery from unusual attitudes. Twelve commercial airline pilots performed multiple unusual attitude recoveries and both quantitative and qualitative dependent measures were collected. Experimental results and future research directions under this CAST initiative and the NASA "Technologies for Airplane State Awareness" research project are described.
Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang
Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
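The background-subtraction component of such a pipeline is commonly a running-average model; a minimal sketch with a made-up scene follows (the learning rate, threshold, and "vehicle" are illustrative, not the paper's actual method parameters):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponentially weighted running-average background model."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=30.0):
    """Pixels deviating strongly from the background model."""
    return np.abs(frame.astype(float) - bg) > threshold

# Made-up scene: an empty street (intensity 50), then a bright vehicle parks.
h, w = 6, 10
empty = np.full((h, w), 50.0)
bg = empty.copy()
for _ in range(20):                      # learn the empty-street background
    bg = update_background(bg, empty)

scene = empty.copy()
scene[2:4, 3:7] = 220.0                  # parked-vehicle region (2x4 pixels)
mask = foreground_mask(bg, scene)
print(int(mask.sum()))  # -> 8 pixels flagged as occupied
```

A slow learning rate keeps briefly occluded pixels from being absorbed into the background, which matters when distinguishing a parked vehicle from passing traffic.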
Ignacio, Joselito; Center for Homeland Defense and Security Naval Postgraduate School
This proposed system process aims to improve subway safety through better enabling the rapid detection and response to a chemical release in a subway system. The process is designed to be location-independent and generalized to most subway systems despite each system's unique characteristics.
Qin, Jiayang; Wang, Xiuwen; Kong, Jian; Ma, Cuiqing; Xu, Ping
In this study, a food-grade cell surface display host/vector system for Lactobacillus casei was constructed. The food-grade host L. casei Q-5 was a lactose-deficient derivative of L. casei ATCC 334 obtained by plasmid elimination. The food-grade cell surface display vector was constructed based on safe DNA elements from lactic acid bacteria containing the following: pSH71 replicon from Lactococcus lactis, lactose metabolism genes from L. casei ATCC 334 as complementation markers, and surface layer protein gene from Lactobacillus acidophilus ATCC 4356 for cell surface display. The feasibility of the new host/vector system was verified by the expression of green fluorescent protein (GFP) on L. casei. Laser scanning confocal microscopy and immunofluorescence analysis using anti-GFP antibody confirmed that GFP was anchored on the surface of the recombinant cells. The stability of recombinant L. casei cells in artificial gastrointestinal conditions was verified, which is beneficial for oral vaccination applications. These results indicate that the food-grade host/vector system can be an excellent antigen delivery vehicle in oral vaccine construction. Copyright © 2014 Elsevier GmbH. All rights reserved.
Hatfield, Jack J.; Villarreal, Diana
The topic of advanced display and control technology is addressed along with the major objectives of this technology, the current state of the art, major accomplishments, research programs and facilities, future trends, technology issues, space transportation systems applications and projected technology readiness for those applications. The holes that may exist between the technology needs of the transportation systems versus the research that is currently under way are addressed, and cultural changes that might facilitate the incorporation of these advanced technologies into future space transportation systems are recommended. Some of the objectives are to reduce life cycle costs, improve reliability and fault tolerance, use of standards for the incorporation of advancing technology, and reduction of weight, volume and power. Pilot workload can be reduced and the pilot's situational awareness can be improved, which would result in improved flight safety and operating efficiency. This could be accomplished through the use of integrated, electronic pictorial displays, consolidated controls, artificial intelligence, and human centered automation tools. The Orbiter Glass Cockpit Display is an example examined.
Huang, Kuo-Chung; Yang, Jinn-Cherng; Wu, Chou-Lin; Lee, Kuen; Hwang, Sheue-Ling
The ghost image induced by the System Crosstalk (SCT) of a 3D display, caused by optical hardware imperfections, is the major factor that jeopardizes stereopsis. System crosstalk can be measured with an optical measuring instrument and describes the optical leakage from neighboring viewing zones. The amount of crosstalk reduces the ability of the viewer to fuse the stereo-images into 3D images. The Viewer-Crosstalk (VCT), which combines hardware and content issues, is an overall evaluation of the ghost image and can be easily interpreted based on the principle of binocular 3D display. Different SCT values were examined with a seven-grade subjective evaluation test. Our previous study showed that many other factors, such as contrast ratio, disparity, and monocular cues of the images, play important roles in stereopsis. In this paper, we study the factors of stereo-images with different crosstalk levels that may affect stereopsis. To simulate the interference between stereo-images, digital image processing is employed to assign different levels of crosstalk between the images at properly specified disparities. The results of this research can provide a valuable reference for content makers and for the optimized design of 3D displays with minimum system crosstalk.
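A common way to simulate crosstalk in software, as the digital image processing step above suggests, is a linear leakage model in which each eye's view receives a fraction c of the opposite view. A minimal sketch (the linear model and the tiny 2×2 test images are illustrative assumptions, not the authors' exact procedure):

```python
def add_crosstalk(left, right, c):
    """Blend a fraction c of the opposite-eye image into each view.

    left, right: grayscale images as nested lists of pixel values.
    c: crosstalk level in [0, 1] (0 = no leakage).
    Returns the degraded (left, right) pair.
    """
    mix = lambda a, b: [[(1 - c) * pa + c * pb for pa, pb in zip(ra, rb)]
                        for ra, rb in zip(a, b)]
    return mix(left, right), mix(right, left)

# A bright left view and a dark right view, degraded at 10% crosstalk:
left = [[100, 100], [100, 100]]
right = [[0, 0], [0, 0]]
l2, r2 = add_crosstalk(left, right, 0.1)
```

Running the degraded pair through the same subjective or objective evaluation at several values of c reproduces the kind of graded-crosstalk stimuli described above.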
Sun, Ted X.; Cheng, Botao
In this paper, Sun Innovations demonstrates an innovative emissive projection display (EPD) system. It is comprised of a fully transparent fluorescent screen with a UV image projector. The screen can be applied to glass windows or windshields without affecting visible light transmission. The UV projector can be based on either a DLP (digital light processor) or a laser scanner display engine. For a DLP-based projector, a discharge lamp coupled to a set of UV filters can be applied to generate a full color video image on the transparent screen. UV or blue-ray laser diodes of different wavelengths can be combined with scanning mirrors to generate a vector display for full-windshield display applications. This display combines the best of both worlds of conventional projection and emissive display technologies. Like a projection display, the screen has no pixel structure and can be manufactured roll-to-roll; the display is scalable. Like an emissive display (e.g., plasma or CRT), the quality of the image is superior, with very large viewing angles. It also offers some unique features. For example, in addition to a fully transparent display on windows or windshields, it can be applied to a black substrate to create the first front projection display on a true "black" screen that has superior image contrast at low projection power. This fundamentally new display platform can enable multiple major commercial applications that cannot be addressed by any of the existing display technologies.
Hua, My; Yip, Henry; Talbot, Prue
The objective was to analyse and compare puff and exhalation duration for individuals using electronic nicotine delivery systems (ENDS) and conventional cigarettes in YouTube videos. Video data from YouTube videos were analysed to quantify puff duration and exhalation duration during use of conventional tobacco-containing cigarettes and ENDS. For ENDS, comparisons were also made between 'advertisers' and 'non-advertisers', genders, brands of ENDS, and models of ENDS within one brand. Puff duration (mean =2.4 s) for conventional smokers in YouTube videos (N=9) agreed well with prior publications. Puff duration was significantly longer for ENDS users (mean =4.3 s) (N = 64) than for conventional cigarette users, and puff duration varied significantly among ENDS brands. For ENDS users, puff duration and exhalation duration were not significantly affected by 'advertiser' status, gender or variation in models within a brand. Men outnumbered women by about 5:1, and most users were between 19 and 35 years of age. YouTube videos provide a valuable resource for studying ENDS usage. Longer puff duration may help ENDS users compensate for the apparently poor delivery of nicotine from ENDS. As with conventional cigarette smoking, ENDS users showed a large variation in puff duration (range =1.9-8.3 s). ENDS puff duration should be considered when designing laboratory and clinical trials and in developing a standard protocol for evaluating ENDS performance.
Magic Lantern and Honeywell FM and T worked together to develop lower-cost, visible light solid-state laser sources to use in laser projector products. Work included a new family of video displays that use lasers as light sources. The displays would project electronic images up to 15 meters across and provide better resolution and clarity than movie film, up to five times the resolution of the best available computer monitors, up to 20 times the resolution of television, and up to six times the resolution of HDTV displays. The products that could be developed as a result of this CRADA could benefit the economy in many ways, such as: (1) Direct economic impact in the local manufacture and marketing of the units. (2) Direct economic impact in exports and foreign distribution. (3) Influencing the development of other elements of display technology that take advantage of the signals that these elements allow. (4) Increased productivity for engineers, FAA controllers, medical practitioners, and military operatives.
Cheah Wai Shiang
Agent-oriented methodology (AOM) is a comprehensive and unified agent methodology for agent-oriented software development. Although AOM is claimed to be able to cope with complex system development, the extent to which this is true has not yet been determined. Therefore, it is vital to conduct an investigation to validate this methodology. This paper presents the adoption of AOM in developing an agent-oriented video surveillance system (VSS). An intruder-handling scenario is designed and implemented through AOM. AOM provides an alternative method to engineer a distributed security system in a systematic manner. It presents the security system from a holistic view, provides a better conceptualization of an agent-oriented security system, and supports rapid prototyping as well as simulation of the video surveillance system.
Burner, A. W.; Rummler, D. R.; Goad, W. K.
A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50-meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically, with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target, and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.
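Automated hole location from a digitized frame typically reduces to finding the centroid of the dark pixels that make up the hole. A minimal sketch of that idea (an assumption for illustration, not the authors' actual algorithm):

```python
def hole_centroid(image, threshold):
    """Locate a dark bullet hole as the centroid of sub-threshold pixels.

    image: grayscale frame as a list of rows of intensity values.
    Returns (x, y) in pixel coordinates, or None if no hole is found.
    """
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Toy 3x3 frame with one dark pixel standing in for the hole:
frame = [[255, 255, 255],
         [255,  10, 255],
         [255, 255, 255]]
center = hole_centroid(frame, 128)
```

Subpixel accuracy of a centroid estimate like this is what makes micron-level repeatability plausible once the camera's pixel-to-target scale is calibrated.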
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Controllers; Investigations: Terminations, Modifications and Rulings AGENCY: U.S. International Trade Commission. ACTION: Notice. Section 337 of the Tariff Act of 1930...
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof, Commission Determination Finding No Violation of the Tariff Act of 1930 AGENCY: U.S. International Trade Commission. ACTION...
Horn, Eva; And Others
Three nonvocal students (ages 5-8) with severe physical handicaps were trained in scan and selection responses (similar to responses needed for operating augmentative communication systems) using a microcomputer-operated video-game format. Results indicated that all three children showed substantial increases in the number of correct responses and…
Pope, Alan T.; Bogart, Edward H.
Describes the Extended Attention Span Training (EAST) system for modifying attention deficits, which takes the concept of biofeedback one step further by making a video game more difficult as the player's brain waves indicate that attention is waning. Notes contributions of this technology to neuropsychology and neurology, where the emphasis is on…
Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C.
The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds’ benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system’s high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.
The author demonstrates a new system useful for reflective learning. The system offers an environment in which one can use handwriting tablet devices to bookmark symbolic and descriptive feedback in simultaneously recorded videos. If one uses video recording and feedback check sheets in reflective learning sessions, one can…
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Video Game Systems and Wireless Controllers and Components Thereof; Notice of Request for... limited exclusion order and a cease and desist order against certain video game systems and wireless...
Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.
Video signal processor uses special-purpose integrated circuits with nonsaturating current mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator and to combine these, for display on color CRT with analog information concerning fading.
AKINCI, Gökay; Polat, Ediz; Koçak, Orhan Murat
Eye pupil detection systems have become increasingly popular in image processing and computer vision applications in medical systems. In this study, a video-based eye pupil detection system is developed for diagnosing bipolar disorder. Bipolar disorder is a condition in which people experience changes in cognitive processes and abilities, including reduced attentional and executive capabilities and impaired memory. In order to detect these abnormal behaviors, a number of neuropsychologi...
Cihak, David; Fahrenkrog, Cynthia; Ayres, Kevin M.; Smith, Catherine
This study evaluated the efficacy of video modeling delivered via a handheld device (video iPod) and the use of the system of least prompts to assist elementary-age students with transitioning between locations and activities within the school. Four students with autism learned to manipulate a handheld device to watch video models. An ABAB…
Campo, E. M.; Roig, J.; Roeder, B.; Wenn, D.; Mamojka, B.; Omastova, M.; Terentjev, E. M.; Esteve, J.
For over a decade, special emphasis has been placed on the convergence of different fields of science and technology, in an effort to serve human needs by enhancing human capabilities. The convergence of the Nano-Bio-Info-Cogni (NBIC) quartet will provide unique solutions to specific needs. This is the case of nano-optomechanical systems (NOMS), presented as a solution to tactile perception, both for the visually impaired and for the general public. NOMS, based on photoactive polymer actuators and devices, is a much sought-after technology. In this scheme, light sources drive mechanical actuation, producing a variety of nano-optomechanical systems such as nano-grippers. In this paper, we provide a series of specifications that the NOMS team is targeting for the development of a tactile display using optically activated smart materials. Indeed, tactile displays remain mainly mechanical, compromising refresh speeds and resolution, which inhibits 3D tactile representation of web interfaces. We also discuss how advantageous NOMS tactile displays could be for the general public. Tactile processing based on stimulation delivered through the NOMS tablet will be tested using neuropsychology methods, in particular event-related brain potentials. Additionally, the NOMS tablet will be instrumental to the development of basic neuroscience research.
Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen
Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
Ebe, Kazuyu, E-mail: firstname.lastname@example.org; Tokuyama, Katsuichi; Baba, Ryuta; Ogihara, Yoshisada; Ichikawa, Kosuke; Toyama, Joji [Joetsu General Hospital, 616 Daido-Fukuda, Joetsu-shi, Niigata 943-8507 (Japan); Sugimoto, Satoru [Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421 (Japan); Utsunomiya, Satoru; Kagamu, Hiroshi; Aoyama, Hidefumi [Graduate School of Medical and Dental Sciences, Niigata University, Niigata 951-8510 (Japan); Court, Laurence [The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)
Purpose: To develop and evaluate a new video image-based QA system, including in-house software, that can display a tracking state visually and quantify the positional accuracy of dynamic tumor tracking irradiation in the Vero4DRT system. Methods: Sixteen trajectories in six patients with pulmonary cancer were obtained with the ExacTrac in the Vero4DRT system. Motion data in the cranio–caudal direction (Y direction) were used as the input for a programmable motion table (Quasar). A target phantom was placed on the motion table, which was placed on the 2D ionization chamber array (MatriXX). Then, the 4D modeling procedure was performed on the target phantom during a reproduction of the patient’s tumor motion. A substitute target with the patient’s tumor motion was irradiated with 6-MV x-rays under the surrogate infrared system. The 2D dose images obtained from the MatriXX (33 frames/s; 40 s) were exported to in-house video-image analyzing software. The absolute differences in the Y direction between the center of the exposed target and the center of the exposed field were calculated. Positional errors were observed. The authors’ QA results were compared to 4D modeling function errors and gimbal motion errors obtained from log analyses in the ExacTrac to verify the accuracy of their QA system. The patients’ tumor motions were evaluated in the wave forms, and the peak-to-peak distances were also measured to verify their reproducibility. Results: Thirteen of sixteen trajectories (81.3%) were successfully reproduced with Quasar. The peak-to-peak distances ranged from 2.7 to 29.0 mm. Three trajectories (18.7%) were not successfully reproduced due to the limited motions of the Quasar. Thus, 13 of 16 trajectories were summarized. The mean number of video images used for analysis was 1156. The positional errors (absolute mean difference + 2 standard deviation) ranged from 0.54 to 1.55 mm. The error values differed by less than 1 mm from 4D modeling function errors
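The reported error metric, the absolute mean difference plus two standard deviations, can be computed directly from the per-frame positional differences. A minimal sketch (the sample values are illustrative, not data from the study):

```python
from statistics import mean, stdev

def positional_error(diffs_mm):
    """Summarize tracking error as mean(|d|) + 2*SD(|d|), the QA metric
    used above, from per-frame target-vs-field differences in mm."""
    abs_diffs = [abs(d) for d in diffs_mm]
    return mean(abs_diffs) + 2 * stdev(abs_diffs)

# Hypothetical per-frame differences (mm) between the exposed target
# center and the exposed field center in the Y direction:
err = positional_error([0.4, 0.5, 0.6, 0.5])
```

With thousands of video frames per trajectory, as in the study, the sample statistics stabilize and the metric becomes a tight bound on tracking error.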
Bailey, Randall E.; Wilz, Susan J.; Arthur, Jarvis J, III
NASA is investigating eXternal Visibility Systems (XVS) concepts, which are a combination of sensor and display technologies designed to achieve an equivalent level of safety and performance to that provided by forward-facing windows in today's subsonic aircraft. This report provides the background for conceptual XVS design standards for display and sensor resolution. XVS resolution requirements were derived on the basis of equivalent performance. Three measures were investigated: a) human vision performance; b) see-and-avoid performance and safety; and c) see-to-follow performance. From these three factors, a minimum, but perhaps not sufficient, resolution requirement of 60 pixels per degree was shown for human vision equivalence. However, see-and-avoid and see-to-follow performance requirements are nearly double that. This report also reviews historical XVS testing.
Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien
Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.
Thorsdatter Orvedal Aase, Anne Lene
In this study we used a portable, event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor, which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge, this is the first study using a VMD sensor to monitor flower-visiting insects. Observation of flower-visiting insects has traditionally been carried out by direct observation, which is time demanding, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, which means ca. 0.35 sec of review per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g., species identification and pollinating behaviour).
Bräger, S.; Chong, A.; Dawson, S.; Slooten, E.; Würsig, B.
One reason for the paucity of knowledge of dolphin social structure is the difficulty of measuring individual dolphins. In Hector's dolphins, Cephalorhynchus hectori, total body length is a function of age, and sex can be determined by individual colouration pattern. We developed a novel system combining stereo-photogrammetry and underwater-video to record dolphin group composition. The system consists of two downward-looking single-lens-reflex (SLR) cameras and a Hi8 video camera in an underwater housing mounted on a small boat. Bow-riding Hector's dolphins were photographed and video-taped at close range in coastal waters around the South Island of New Zealand. Three-dimensional, stereoscopic measurements of the distance between the blowhole and the anterior margin of the dorsal fin (BH-DF) were calibrated by a suspended frame with reference points. Growth functions derived from measurements of 53 dead Hector's dolphins (29 female : 24 male) provided the necessary reference data. For the analysis, the measurements were synchronised with corresponding underwater-video of the genital area. A total of 27 successful measurements (8 with corresponding sex) were obtained, showing how this new system promises to be potentially useful for cetacean studies.
Hsu, Chia-chun A.; Ling, Jim; Li, Qing; Kuo, C.-C. J.
The distributed Multiplayer Online Game (MOG) system is complex, since it involves technologies in computer graphics, multimedia, artificial intelligence, computer networking, embedded systems, etc. Due to the large scope of this problem, the design of MOG systems has not yet been widely addressed in the literature. In this paper, we review and analyze the current MOG system architecture and then evaluate it. Furthermore, we propose a clustered-server architecture to provide a scalable solution, together with a region-oriented allocation strategy. Two key issues, i.e., interest management and synchronization, are discussed in depth. Some preliminary ideas to deal with the identified problems are described.
Reed, Judd E.; Johnson, C. Daniel
Computed tomographic colography (CTC or virtual colonoscopy) is a new technique for imaging the colon for the detection of colorectal neoplasms. Early clinical assessment of this procedure has shown that its performance is acceptable for colorectal screening examinations. The current version of CTC utilizes an interactive combination of axial, reformatted 2D, and 3D images (from an endoluminal perspective) that are generated in real time. Retained fluid in the lumen of the colon is a commonly encountered problem that can obscure lesions. Prone imaging, in addition to standard supine views, is often required to visualize obscured colonic segments. Although the colorectum is often seen optimally with combined supine and prone views, twice as much interpretation time is required with both acquisitions. The purpose of this study is to describe a novel system of synchronous display of supine and prone images of the colon. Simultaneous display of synchronized (anatomically registered) views of the colon eliminates the need for two separate readings of the colon and shortens interpretation time. This tool has all of the features of the original CTC interpretation system (presented in 1995) and includes recent innovations such as virtual pathology, which is presented in another paper within these Proceedings. The anatomic levels are indexed to match each other and advance synchronously as the radiologist interprets the data set. Axial, reformatted 2D, and 3D images are displayed and simultaneously updated for both prone and supine acquisitions on the same computer screen. The colon needs to be reviewed only once, with the diagnostic benefit of both scans. In many cases, the two scans can be interpreted nearly as quickly as one. Conclusion: synchronous display of prone and supine images of the colon is a new enhancement for CTC that combines the advantages of prone and supine views without the added interpretation time of reviewing two separate scans.
Sasaki, Hikaru; Shikida, Mitsuhiro; Sato, Kazuo
This paper describes a novel type of force transmission system for haptic display devices. The system consists of an array of end-effector elements, a force/displacement transmitter, and a single actuator producing a large force/displacement. It has tulip-shaped electrostatic clutch devices to distribute the force/displacement from the actuator among the individual end-effectors. The specifications of the three components were determined so as to stimulate touched human fingers. The components were fabricated using micro-electromechanical systems and conventional machining technologies, and finally they were assembled by hand. The performance of the assembled transmission system was experimentally examined, and it was confirmed that each projection in the arrayed end-effectors could be moved individually. The actuator in a system whose total size was only 3.0 cm × 3.0 cm × 4.0 cm produced a 600 mN force and displaced individual array elements by 18 µm.
Seo, Young-Ho; Lee, Yoon-Hyuk; Koo, Ja-Myung; Kim, Woo-Youl; Yoo, Ji-Sang; Kim, Dong-Wook
We propose a new system that can generate digital holograms using natural color information. The system consists of a camera system for capturing images (object points) and software (S/W) for various image processing. The camera system uses a vertical rig, which is equipped with two depth-and-RGB cameras and a cold mirror, which has different reflectances according to wavelength, for obtaining images with the same viewpoint. The S/W is composed of engines for processing the captured images and executing the computer-generated hologram for generating digital holograms using general-purpose graphics processing units. Each algorithm was implemented using the C/C++ and CUDA languages, and all engines, in the form of libraries, were integrated in the LabVIEW environment. The proposed system can generate about 10 digital holographic frames per second using about 6 K object points.
Chen, Li; Zhang, Hao; Feng, Hailan; Zhang, Fengjun
This paper aims to develop a computerized three-dimensional system for displaying and analyzing mandibular helical axis pathways. Mandibular movements were recorded using a six-degrees-of-freedom ultrasonic jaw movement recording device. Three-dimensional digital models of the midface and the mandible were reconstructed and segmented from CT skull images. The digital models were then transformed into the coordinate system of the mandibular motion data using an optical measuring system. The system was programmed on the basis of the Visualization ToolKit and the Open Scene Graphics Library. According to the motion data, transformation matrices were calculated to simulate mandibular movements. Meanwhile, mandibular helical axis pathways were calculated and displayed three-dimensionally by means of an eigenvalue method. The following parameters of the mandibular helical axis were calculated: the rotation around the instantaneous helical axis, the translation along it, its spatial orientation, and its position and distance relative to any chosen reference point. These parameters could be exported to describe the whole of mandibular movement comprehensively. It can be concluded that our system will contribute to the study of mandibular helical axis pathways.
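The rotation about the instantaneous helical axis can be recovered from the frame-to-frame rotation matrix. A minimal sketch of the standard matrix relations (an illustration of the underlying geometry, not necessarily the eigenvalue routine the authors used):

```python
import math

def rotation_angle(R):
    """Rotation (radians) about the instantaneous helical axis, from the
    trace of a 3x3 rotation matrix R given as nested lists."""
    tr = R[0][0] + R[1][1] + R[2][2]
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def rotation_axis(R):
    """Unit axis direction from the skew-symmetric part of R
    (valid when the angle is not 0 or pi)."""
    v = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Example: a 90-degree rotation about the z axis.
Rz = [[0.0, -1.0, 0.0],
      [1.0,  0.0, 0.0],
      [0.0,  0.0, 1.0]]
```

The translation along the axis then follows by projecting the displacement of any mandibular point onto this axis direction.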
M. O. Kostishin
The paper deals with estimating the accuracy of object location display in geographic information systems and in the display systems of manned-aircraft navigation complexes. Application features of liquid crystal screens with different numbers of vertical and horizontal pixels are considered for displaying geographic information at different scales. Navigation parameter values are displayed on board the aircraft in two ways: a numeric value is shown directly on the screen of a multi-color indicator, or a silhouette of the object is formed on the screen against a substrate background, which is a graphical representation of the area map in the flight zone. Various scales of digital area maps currently used in the aviation industry are considered. Calculation results for the scale interval of one pixel, depending on the specifications of the liquid crystal screen and the zoom level of the displayed map area on the multifunction digital display, are given. The paper contains experimental results of the accuracy evaluation for the displayed position of the aircraft based on data from the satellite navigation system and the inertial navigation system, obtained during a flight program run on a real object. On the basis of these calculations, a family of graphs was created for the display error of the object reference point position using onboard indicators with liquid crystal screens of different resolutions (6″×8″, 7.2″×9.6″, 9″×12″) for two map display scales (1:0.25 km and 1:2 km). These dependency graphs can be used both to assess the error of object area position display in existing navigation systems and to calculate the error when upgrading facilities.
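The scale interval of one pixel follows from the screen's physical pixel pitch and the map scale. A minimal sketch under assumed units (the function name, the 1:S scale convention, and the example numbers are illustrative, not taken from the paper):

```python
def metres_per_pixel(screen_width_in, h_pixels, scale_denominator):
    """Ground distance represented by one pixel of a map at scale
    1:scale_denominator on an LCD of given physical width.

    screen_width_in: physical screen width in inches (hypothetical spec).
    h_pixels: horizontal resolution of the screen.
    """
    metres_per_inch = 0.0254
    pixel_pitch_m = screen_width_in * metres_per_inch / h_pixels
    return pixel_pitch_m * scale_denominator

# Example: an 8-inch-wide screen, 1024 horizontal pixels, 1:100000 map.
interval = metres_per_pixel(8, 1024, 100000)
```

Comparing this per-pixel interval across screen resolutions and map scales yields exactly the kind of error-versus-resolution family of curves the paper reports.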
Endo, Chiaki; Sakurada, A; Kondo, T
Recently, endoscopic procedures including surgery, intervention, and examination have been widely performed. Medical practitioners are required to record these procedures precisely in order to review them retrospectively and to obtain a legally reliable record. The Medical Forensic System made by KS Olympus Japan simultaneously records two kinds of video together with the patient's data, such as heart rate, blood pressure, and SpO2. We installed this system in the bronchoscopy room and have experienced its benefits. With this system, we can obtain the bronchoscopic image, a view of the bronchoscopy room, and the patient's data simultaneously. We can check the quality of the bronchoscopic procedures retrospectively, which is useful for bronchoscopy staff training. The Medical Forensic System should be installed for any kind of endoscopic procedure.
Jihwan Park; Youngsun Kong; Yunyoung Nam
In order to keep vision in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the direction opposite to the head movement. Disorders of the vestibular system degrade vision, causing abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotating chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high costs. Thus, a low-cost video-oculography system is necessary to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained from an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase, and asymmetry were 0.81, 2.74, and 17.35, respectively. We showed that our system is able to measure these clinical features.
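Two of the clinical features measured above, gain and asymmetry, reduce to simple ratios of peak slow-phase eye velocity to head velocity. A minimal sketch of the standard definitions (an illustration, not the authors' exact formulas):

```python
def vor_gain(peak_eye_velocity, peak_head_velocity):
    """VOR gain: magnitude of eye velocity relative to head velocity
    (the eyes move opposite to the head, hence the absolute value)."""
    return abs(peak_eye_velocity / peak_head_velocity)

def asymmetry(gain_right, gain_left):
    """Percent asymmetry between rightward and leftward responses."""
    return 100.0 * (gain_right - gain_left) / (gain_right + gain_left)

# Hypothetical rotatory-chair numbers: head at 60 deg/s, eyes at -54 deg/s.
g = vor_gain(-54.0, 60.0)
a = asymmetry(0.9, 0.6)
```

Phase, the third reported feature, is typically obtained by fitting sinusoids to the eye and head velocity traces and differencing their phases, which requires the full time series rather than peak values.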
Roh, Mootaek; McHugh, Thomas J; Lee, Kyungmin
To investigate the relationship between neural function and behavior it is necessary to record neuronal activity in the brains of freely behaving animals, a technique that typically involves tethering to a data acquisition system. Optimally this approach allows animals to behave without any interference of movement or task performance. Currently many laboratories in the cognitive and behavioral neuroscience fields employ commercial motorized commutator systems using torque sensors to detect tether movement induced by the trajectory behaviors of animals. In this study we describe a novel motorized commutator system which is automatically controlled by video tracking. To obtain accurate head direction data two light emitting diodes were used and video image noise was minimized by physical light source manipulation. The system calculates the rotation of the animal across a single trial by processing head direction data and the software, which calibrates the motor rotation angle, subsequently generates voltage pulses to actively untwist the tether. This system successfully provides a tether twist-free environment for animals performing behavioral tasks and simultaneous neural activity recording. To the best of our knowledge, it is the first to utilize video tracking generated head direction to detect tether twisting and compensate with a motorized commutator system. Our automatic commutator control system promises an affordable and accessible method to improve behavioral neurophysiology experiments, particularly in mice.
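The core of such a system is turning a stream of video-derived head-direction samples into a cumulative twist angle for the motor to undo. A hedged sketch of the angle-unwrapping step in plain Python; the sampling rate, the LED-derived headings, and the motor interface are assumptions for illustration, not details from the paper:

```python
def net_rotation(headings_deg):
    """Accumulate signed rotation from a series of head-direction samples
    (degrees, 0-360), unwrapping jumps across the 0/360 boundary."""
    total = 0.0
    for prev, cur in zip(headings_deg, headings_deg[1:]):
        d = cur - prev
        # Unwrap: a jump larger than 180 deg means the boundary was crossed.
        if d > 180:
            d -= 360
        elif d < -180:
            d += 360
        total += d
    return total

# Mouse makes two full clockwise laps, heading sampled every 30 degrees
samples = [(i * 30) % 360 for i in range(25)]
turns = net_rotation(samples) / 360.0  # net turns the commutator must undo
```

The controller would then generate voltage pulses to rotate the commutator by `-turns` revolutions, keeping the tether untwisted.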
This paper presents an efficient video filtering scheme and its implementation in a field-programmable logic device (FPLD). Since the proposed nonlinear, spatiotemporal filtering scheme is based on order statistics, its efficient implementation benefits from a bit-serial realization. The utilization of both the spatial and temporal correlation characteristics of the processed video significantly increases the computational demands of this solution, and thus implementation becomes a significant challenge. Simulation studies reported in this paper indicate that the proposed pipelined bit-serial FPLD filtering solution can achieve speeds of up to 97.6 Mpixels/s and consumes 1700 to 2700 logic cells for the speed-optimized and area-optimized versions, respectively. Thus, the filter area represents only 6.6 to 10.5% of the Altera STRATIX EP1S25 device available on the Altera Stratix DSP evaluation board, which has been used to implement a prototype of the entire real-time vision system. As such, the proposed adaptive video filtering scheme is both practical and attractive for real-time machine vision and surveillance systems as well as conventional video and multimedia applications.
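As a behavioral illustration of the kind of order-statistics operation such a filter performs, here is a 3x3x3 spatiotemporal median in plain Python. This is only a reference model; the paper's actual design is a pipelined bit-serial hardware realization:

```python
def spatiotemporal_median(frames, t, y, x):
    """Order-statistics filter over a 3x3 spatial window in the previous,
    current, and next frames (a 3x3x3 spatiotemporal neighborhood)."""
    window = []
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                window.append(frames[t + dt][y + dy][x + dx])
    window.sort()
    return window[len(window) // 2]  # median of the 27 samples

# Three 3x3 frames; the center pixel of frame 1 carries impulse noise
frames = [[[10] * 3 for _ in range(3)] for _ in range(3)]
frames[1][1][1] = 255
out = spatiotemporal_median(frames, 1, 1, 1)  # impulse removed → 10
```

Exploiting the temporal neighbors is what lets the filter suppress noise without blurring detail as much as a purely spatial median would.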
M.Sc. (Computer Science). A video conference is an interactive meeting between two or more locations, facilitated by simultaneous two-way video and audio transmissions. People in a video conference, also known as participants, join video conferences for business and recreational purposes. In a typical video conference, every participant should be properly identified and authenticated if the information discussed during the conference is confidential. This preve...
Schonfeld, Dan; Lelescu, Dan
In this paper, a novel visual search engine for video retrieval and tracking from compressed multimedia databases is proposed. Our approach exploits the structure of video compression standards in order to perform object matching directly on the compressed video data. This is achieved by utilizing motion compensation--a critical prediction filter embedded in video compression standards--to estimate and interpolate the data needed for template matching. Motion analysis is used to implement fast tracking of objects of interest on the compressed video data. Presented with a query in the form of template images of objects, the system operates on the compressed video in order to find the images or video sequences where those objects are present, along with their positions in the image. This in turn enables the retrieval and display of the query-relevant sequences.
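The key idea of reusing motion compensation can be sketched as follows: rather than re-matching the template against every decoded frame, a tracker can shift the object's last known position by the macroblock motion vector parsed from the bitstream. A simplified, hypothetical sketch; the grid layout, function names, and 16-pixel block size are illustrative assumptions:

```python
def propagate_position(pos, motion_vectors, block=16):
    """Shift a tracked object's position from one frame to the next using
    the motion vector of the macroblock that contains it, instead of
    re-running template matching on decoded pixels."""
    x, y = pos
    bx, by = x // block, y // block
    dx, dy = motion_vectors[by][bx]  # per-macroblock (dx, dy) from bitstream
    return (x + dx, y + dy)

# Hypothetical 2x2 grid of macroblock motion vectors for a 32x32 frame
mvs = [[(0, 0), (3, -1)],
       [(0, 0), (0, 0)]]
pos = propagate_position((20, 5), mvs)  # object in top-right block → (23, 4)
```

Full template matching then only needs to run occasionally to re-anchor the track, which is what makes compressed-domain tracking fast.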
Ziemke, Robert A.
The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.
Studies of microbial cell envelopes, and particularly of cell surface proteins and the mechanisms of their localization, have brought about new biotechnological applications of the gained knowledge in the surface display of homologous and heterologous proteins. By fusing surface proteins or their anchoring domains with different proteins of interest, their so-called genetic immobilization is achieved. Hybrid proteins are engineered in such a way that they are expressed in the host cells, secreted to the cell surface, and incorporated into the wall/envelope moiety. In this way, the laborious and often detrimental procedure of chemical immobilization of the protein is avoided by letting the cells do the whole procedure. Both bacterial and yeast cells have been used for this purpose, and a number of potential biotechnological applications of surface-displayed proteins have been reported. Among the most frequently used passenger proteins are lipolytic enzymes, due to their great technological significance and numerous important applications. In this review, our current knowledge on mechanisms and molecular systems for the surface display of lipolytic enzymes on bacterial and yeast cell surfaces is summarized.
Ichihashi, Yasuyuki; Yamamoto, Kenji
This paper describes electronic holography output of three-dimensional (3D) video with integral photography as input. A real-time 3D image reconstruction system was implemented by using a 4K (3840×2160) resolution IP camera to capture 3D images and converting them to 8K (7680×4320) resolution holograms. Multiple graphics processing units (GPUs) were used to create 8K holograms from 4K IP images. In addition, higher resolution holograms were created to successfully reconstruct live-scene video having a diagonal size of 6 cm using a large electronic holography display.
Lee, Kang Oh; Nakaji, Kei; 中司, 敬
A web-based video direct e-commerce system was developed to solve problems in internet shopping and to increase consumer trust in the safety and quality of agricultural products. We found that the newly developed e-commerce system could overcome the demerits of internet shopping and give consumers the same experience as purchasing products offline. Producers could have opportunities to explain their products and talk to customers, and could earn increased income by maintaining a certain numbe...
Thompson, J. G.; Young, K. R.
A hierarchical structure of the interlinked programs was developed to provide a flexible computer-aided design tool. A graphical input technique and a data structure are considered which provide the capability of entering the control system model description into the computer in block diagram form. An information storage and retrieval system was developed to keep track of the system description, and analysis and simulation results, and to provide them to the correct routines for further manipulation or display. Error analysis and diagnostic capabilities are discussed, and a technique was developed to reduce a transfer function to a set of nested integrals suitable for digital simulation. A general, automated block diagram reduction procedure was set up to prepare the system description for the analysis routines.
Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel
Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good-quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of a mobile phone, together with the temperature at the site, can let users know via the Internet whether the weather is nice enough to swim. In this paper, we present a system that tags the frames of video recorded on mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variation. As far as we know, there is no other application like the one presented in this paper.
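A minimal sketch of the frame-tagging step, assuming one JSON metadata record per tagged frame; the field names and structure are illustrative assumptions, since the abstract does not specify the exact format:

```python
import json

def tag_frame(frame_index, timestamp, sensors):
    """Attach sensor readings to a video frame as a semantic metadata
    record that a server could later index for efficient searches."""
    return {
        "frame": frame_index,
        "timestamp": timestamp,  # seconds from start of recording
        "sensors": sensors,      # e.g. GPS fix, temperature, accelerometer
    }

# Hypothetical readings for a beach recording at ~30 fps
tags = [
    tag_frame(0, 0.000, {"lat": 28.13, "lon": -15.43, "temp_c": 24.5}),
    tag_frame(30, 1.001, {"lat": 28.13, "lon": -15.43, "temp_c": 24.6}),
]
payload = json.dumps(tags)  # uploaded to the video server with the stream
```

Keeping the tags as structured records, rather than burning them into the pixels, is what enables semantic queries such as "videos where temp_c > 24" on the server side.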
Background The users of today's commercial prosthetic hands are not given any conscious sensory feedback. To overcome this deficiency in prosthetic hands we have recently proposed a sensory feedback system utilising a "tactile display" on the remaining amputation residual limb acting as a man-machine interface. Our system uses the recorded pressure in a hand prosthesis and feeds back this pressure onto the forearm skin. Here we describe the design and technical solution of the sensory feedback system aimed at hand prostheses for trans-radial/humeral amputees. Critical parameters for the sensory feedback system were investigated. Methods A sensory feedback system consisting of five actuators, control electronics, and a test application running on a computer was designed and built. Firstly, we investigated which force levels were applied to the forearm skin of the user while operating the sensory feedback system. Secondly, we studied whether the proposed system could be used together with a myoelectric control system, since the displacement of the skin caused by the sensory feedback system would generate artefacts in the recorded myoelectric signals. Accordingly, EMG recordings were performed and an analysis of these is included. The sensory feedback system was also preliminarily evaluated in a laboratory setting on two healthy non-amputated test subjects, with a computer generating the stimuli, with regard to spatial resolution and force discrimination. Results We showed that the sensory feedback system generated a force approximately proportional to the angle of control. The system can be used together with a myoelectric system, as the artefacts generated by the actuators were easily removed using a simple filter. Furthermore, the application of the system on two test subjects showed that they were able to discriminate tactile sensation with regard to spatial resolution and level of force. Conclusions The results of these initial experiments in non-amputees indicate that
Gross, Martin; Mayer, Udo; Kaufhold, Rainer
Adverse weather conditions affect flight safety as well as the productivity of the air traffic industry. The problem becomes evident in the airport area (taxiing, takeoff, approach, and landing). The productivity of the air traffic industry goes down because the resources of the airport cannot be used optimally. Canceled and delayed flights lead directly to additional costs for the airlines. Against the background of problems aggravated by predicted increases in air traffic, the European Union launched the project AWARD (All Weather ARrival and Departure) in June 1996. Eleven European aerospace companies and research institutions are participating. The project will be finished by the end of 1999. The subject of AWARD is the development of a Synthetic Vision System (based on database and navigation) and an Enhanced Vision System (based on sensors such as FLIR and MMWR). Darmstadt University of Technology is responsible for the development of the SVS prototype. The SVS application depends on precise navigation, databases for terrain and flight-relevant information, and a flight guidance display. The objective is to allow landings under CAT III a/b conditions independently of CAT III ILS airport installations. One goal of SVS is to enhance the situation awareness of pilots during all airport area operations by designing an appropriate man-machine interface for the display. This paper describes the current state of the research and development of the Synthetic Vision System being developed in AWARD. The paper describes the methodology used to identify the information that should be displayed. Human factors which influenced the basic design of the SVS are portrayed, and some of the planned activities for the flight simulation tests are summarized.
Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang
This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
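The connected-component analysis stage described above can be illustrated with a simple 4-connected BFS labeling in plain Python; the paper's actual implementation details may differ:

```python
from collections import deque

def label_blobs(binary):
    """4-connected component labeling of a binary image (list of lists of
    0/1). Returns a label map and the number of blobs found."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                next_label += 1
                queue = deque([(sy, sx)])
                labels[sy][sx] = next_label
                while queue:  # flood-fill this touch blob
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label

# Two separate touch blobs in a tiny thresholded infrared frame
frame = [[1, 1, 0, 0],
         [0, 1, 0, 1],
         [0, 0, 0, 1]]
_, count = label_blobs(frame)  # → 2 blobs
```

Each labeled blob's centroid can then be tracked across frames to associate motion correspondence, as the tracking phase of the paper describes.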
Zhiwei, Jia; Guozheng, Yan; Bingquan, Zhu
Wireless power transmission is considered a practical way of overcoming the power shortage of video capsule endoscopy (VCE). However, most patients cannot tolerate lying for long hours in a fixed transmitting coil during diagnosis. To develop a portable wireless power transmission system for VCE, a compact transmitting coil and a portable inverter circuit driven by rechargeable batteries are proposed. The coupled coils, optimized considering stability and safety conditions, are a 28-turn transmitting coil and a six-strand receiving coil. The drive circuit is designed according to the portability principle. Experiments show that the integrated system could continuously supply power to a dual-head VCE for more than 8 h at a frame rate of 30 frames per second with a resolution of 320 × 240. The portable VCE exhibits potential for clinical applications, but requires further improvement and tests.
Dillavou, Marcus W.; Shum, Phillip Corey; Guthrie, Baron L.; Shenai, Mahesh B.; Deaton, Drew Steven; May, Matthew Benton
Provided herein are methods and systems for image registration from multiple sources. A method for image registration includes rendering a common field of interest that reflects a presence of a plurality of elements, wherein at least one of the elements is a remote element located remotely from another of the elements and updating the common field of interest such that the presence of the at least one of the elements is registered relative to another of the elements.
Moshirnia, Andrew; Israel, Maya
Despite the increasing popularity of many commercial video games, this popularity is not shared by educational video games. Modified video games, however, can bridge the gap in quality between commercial and educational video games by embedding educational content into popular commercial video games. This study examined how different information…
Cai, Lin; Deng, Nianchun; Xiao, Zexin
The cables in the anchorage zone of a cable-stayed bridge are hidden within the embedded pipe, which makes it difficult to detect damage to the cables by visual inspection. We have built a detection device based on high-resolution video capture, enabling remote observation of the invisible segment of a stay cable and detection of damage to the cable's outer surface within a small volume. The system mainly consists of optical stents and a precision mechanical support device, an optical imaging system, a lighting source, motor drive control, and an IP camera video capture system. The principal innovations of the device are: (1) a set of telescope objectives with three different focal lengths, designed and used at different monitoring distances by means of a converter; (2) the lens system is kept well separated from the lighting system, so that the imaging optical path can effectively avoid the harsh environment in the invisible part of the cables. Practice shows that the device not only can collect clear surveillance video images of the cable's outer surface effectively, but also has broad application prospects in the security warning of prestressed structures.
Video content on the Internet has increased greatly in recent years. In spite of the efforts of different organizations and governments to increase the accessibility of websites, most multimedia content on the Internet is not accessible. This paper describes a system that contributes to making multimedia content more accessible on the Web by automatically translating subtitles in oral language to SignWriting, a way of writing sign language. This system extends the functionality of a general web platform that can provide accessible web content for different needs. This platform has a core component that automatically converts any web page to a web page compliant with level AA of the WAI guidelines. Around this core component, different adapters complete the conversion according to the needs of specific users. One adapter is the Deaf People Accessibility Adapter, which provides accessible web content for the Deaf, based on SignWriting. The functionality of this adapter has been extended with the video subtitle translator system. A first prototype of this system has been tested through different methods, including usability and accessibility tests, and the results show that this tool can enhance the accessibility of video content available on the Web for Deaf people.
Allen, A. J.; Terry, J. L.; Garnier, D.; Stillerman, J. A.; Wurden, G. A.
A new system for routine digitization of video images is presently operating on the Alcator C-Mod tokamak. The PC-based system features high resolution video capture, storage, and retrieval. The captured images are stored temporarily on the PC, but are eventually written to CD. Video is captured from one of five filtered RS-170 CCD cameras at 30 frames per second (fps) with 640×480 pixel resolution. In addition, the system can digitize the output from a filtered Kodak Ektapro EM Digital Camera which captures images at 1000 fps with 239×192 resolution. Present views of this set of cameras include a wide angle and a tangential view of the plasma, two high resolution views of gas puff capillaries embedded in the plasma facing components, and a view of ablating, high speed Li pellets. The system is being used to study (1) the structure and location of visible emissions (including MARFEs) from the main plasma and divertor, (2) asymmetries in gas puff plumes due to flows in the scrape-off layer (SOL), and (3) the tilt and cigar-shaped spatial structure of the Li pellet ablation cloud.
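For context on the storage demands such a system faces, the raw data rates of the two capture modes are easy to estimate. A small sketch, assuming uncompressed 8-bit monochrome pixels (an assumption; the actual digitization depth is not stated above):

```python
def rate_mb_per_s(width, height, fps, bytes_per_pixel=1):
    """Raw video data rate in megabytes per second."""
    return width * height * fps * bytes_per_pixel / 1e6

ccd = rate_mb_per_s(640, 480, 30)      # filtered RS-170 CCD cameras
kodak = rate_mb_per_s(239, 192, 1000)  # Kodak Ektapro EM digital camera
# ccd ≈ 9.2 MB/s, kodak ≈ 45.9 MB/s
```

At these rates a single CD fills in well under two minutes of continuous capture, which is why the system buffers shots on the PC and archives selected sequences to CD afterwards.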
Han, Xiaoguang; Fu, Hongbo; Zheng, Hanlin; Liu, Ligang; Wang, Jue
Stop-motion is a well-established animation technique but is often laborious and requires craft skills. A new video-based system can animate the vast majority of everyday objects in stop-motion style, more flexibly and intuitively. Animators can perform and capture motions continuously instead of breaking them into increments and shooting one still picture per increment. More important, the system permits direct hand manipulation without resorting to rigs, achieving more natural object control for beginners. The system's key component is two-phase keyframe-based capturing and processing, assisted by computer vision techniques. With this system, even amateurs can generate high-quality stop-motion animations.
Martin, Benjamin M.; Irwin, Elise R.
We designed a digital underwater video camera system to monitor nesting centrarchid behavior in the Tallapoosa River, Alabama, 20 km below a peaking hydropower dam with a highly variable flow regime. Major components of the system included a digital video recorder, multiple underwater cameras, and specially fabricated substrate stakes. The innovative design of the substrate stakes allowed us to effectively observe nesting redbreast sunfish Lepomis auritus in a highly regulated river. Substrate stakes, which were constructed for the specific substratum complex (i.e., sand, gravel, and cobble) identified at our study site, were able to withstand a discharge level of approximately 300 m3/s and allowed us to simultaneously record 10 active nests before and during water releases from the dam. We believe our technique will be valuable for other researchers that work in regulated rivers to quantify behavior of aquatic fauna in response to a discharge disturbance.
Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.
The increasing use of mobile phones equipped with digital cameras and the ability to post images and information to the Internet in real-time has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time stamped storm report photographs utilizing a mobile phone application to NWS social media weather forecast office pages has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, processing and distribution system and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails and clicking on them would display the
R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J M
Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...
A microprocessor has been used to provide the major control functions in the Telemation/Sandia unattended video surveillance system. The software in the microprocessor provides control of the various hardware components and provides the capability of interactive communications with the operator. This document, in conjunction with the commented source listing, defines the philosophy and function of the software. It is assumed that the reader is familiar with the RCA 1802 COSMAC microprocessor and has a reasonable computer science background.
Xia, Xue; Qiu, Yun; Hu, Lin; Fan, Jingchao; Guo, Xiuming; Zhou, Guomin
With the emergence of the 'Internet Plus' concept and the rapid progress of new media technology, traditional industries have increasingly shared in the fruits of informatization and networking. Proceeding from real plant protection demands, the construction of a cloud-based video monitoring system that surveils diseases and pests in apple orchards is discussed, aiming to solve the lack of timeliness and comprehensiveness in the contr...
Carlo, Leonardo De [Gran Sasso Science Institute (GSSI) (Italy)]; Gentile, Guido; Giuliani, Alessandro [Università degli Studi Roma Tre, Dipartimento di Matematica e Fisica (Italy)]
We consider a three-dimensional chaotic system consisting of the suspension of Arnold’s cat map coupled with a clock via a weak dissipative interaction. We show that the coupled system displays a synchronization phenomenon, in the sense that the relative phase between the suspension flow and the clock locks to a special value, thus making the motion fall onto a lower dimensional attractor. More specifically, we construct the attractive invariant manifold, of dimension smaller than three, using a convergent perturbative expansion. Moreover, we compute via convergent series the Lyapunov exponents, including notably the central one. The result generalizes a previous construction of the attractive invariant manifold in a similar but simpler model. The main novelty of the current construction lies in the computation of the Lyapunov spectrum, which consists of non-trivial analytic exponents. Some conjectures about a possible smoothening transition of the attractor as the coupling is increased are also discussed.
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close-to-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of movement.
Granaas, Michael M.; Rhea, Donald C.
The requirements for the development of real-time displays are reviewed. Of particular interest are the psychological aspects of design such as the layout, color selection, real-time response rate, and the interactivity of displays. Some existing Western Aeronautical Test Range displays are analyzed.
McEnery, K W; Suitor, C T; Hildebrand, S; Downs, R
RadStation is a digital dictation system having an integrated display of clinical information. The three-tiered system architecture provides robust performance, with most information displayed within one second after a request. The multifunctional client tier is a unique client/browser hybrid. A Web browser display window functions as the client application's data display window for clinical information, radiology reports, and laboratory and pathology results. RadStation provides a robust platform for digital dictation functionality. The system's internal status checks ensure operational integrity in a clinical environment. Also, the programmable dictation microphone and bar-code reader supplant the mouse as the system's primary input device. By merging information queries into existing work flow, radiologist's interpretation efficiency is maintained with instant access to essential clinical information. Finally, RadStation requires minimal training and has been enthusiastically accepted by our radiologists in an active clinical practice.
Zabołotny, Wojciech M.; Pastuszak, Grzegorz; Sokół, Grzegorz; Borowik, Grzegorz; Gąska, Michał; Kasprowicz, Grzegorz H.; Poźniak, Krzysztof T.; Abramowski, Andrzej; Buchowicz, Andrzej; Trochimiuk, Maciej; Frasunek, Przemysław; Jurkiewicz, Rafał; Nalbach-Moszynska, Małgorzata; Wawrzusiak, Radosław; Bukowiecka, Danuta; Tyburska, Agata; Struniawski, Jarosław; Jastrzebski, Paweł; Jewartowski, Błażej; Brawata, Sebastian; Bubak, Iwona; Gloza, Małgorzata
The paper describes the prototype implementation of the Video Signals Integrator (VSI). The function of the system is to integrate video signals from many sources. The VSI is a complex hybrid system consisting of hardware, firmware and software components. Its creation requires the joint effort of experts from different areas. The VSI capture device is a portable hardware device responsible for capturing video signals from different sources and in various formats, and for transmitting them to the server. The NVR server aggregates video and control streams coming from different sources and multiplexes them into logical channels, with each channel representing a single source. From there each channel can be distributed further to the end clients (consoles) for live display via a number of RTSP servers. The end client can, at the same time, inject control messages into a given channel to control movement of a CCTV camera.
Renner, Adam P.
Helmet mounted displays (HMDs) have not been supported with adequate methods and materials to validate and verify the performance of the underlying tracking systems when tested in a simulated or operational environment. Like most electronic systems on aircraft, HMDs evolve over the lifecycle of the system due to requirements changes or diminishing manufacturing sources. Hardware and software bugs are often introduced as the design evolves, and it is necessary to revalidate a system's performance attributes over the course of these design changes. An on-aircraft test has been developed and refined to address this testing gap for the Joint Helmet Mounted Cueing System (JHMCS) on F-16 aircraft. This test can be readily ported to other aircraft systems which employ the JHMCS, and has already been ported to the F-18. Additionally, this test method could provide added value in the testing of any HMD that requires accurate cueing, whether used on fixed- or rotary-wing aircraft.
Lim, Yongjun; Hong, Keehoon; Kim, Hayan; Choo, Hyon-gon; Park, Minsik; Kim, Jinwoong
In this paper, we use an optical method for the implementation of spatially-tiled digital micro-mirror devices (DMDs) to expand space bandwidth product in general digital holographic display systems. In concatenating more than two spatial light modulators (SLMs) optically, there may exist both phase discontinuity and amplitude mismatching of hologram images emanating from two adjacent SLMs. To observe and estimate those properties in digital holographic display systems, we adopt quantitative phase imaging technique based on transport of intensity equation.
Clark, Shane; Petersen, John E; Frantz, Cindy M; Roose, Deborah; Ginn, Joel; Rosenberg Daneri, Daniel
Tackling complex environmental challenges requires the capacity to understand how relationships and interactions between parts result in dynamic behavior of whole systems. There has been convincing research that these "systems thinking" skills can be learned. However, there is little research on methods for teaching these skills to children or assessing their impact. The Environmental Dashboard is a technology that uses "sociotechnical feedback": information feedback designed to affect thought and behavior. The Environmental Dashboard (ED) combines real-time information on community resource use with images and words that reflect pro-environmental actions of community members. Prior research indicates that ED supports the development of systems thinking in adults. To assess its impact on children, the technology was installed in a primary school and children were passively exposed to ED displays. This resulted in no measurable impact on systems thinking skills. The next stage of this research examined the impact of actively integrating ED into lessons on electricity in 4th and 5th grade. This active integration enhanced both content-related systems thinking skills and content retention.
Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan
In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides combined temporal, quality and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain obtained by using different symbol constellations across the scalable layers as compared to a fixed constellation.
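The parameter selection described above amounts to a constrained search: over a grid of application-layer (QP, GOP) and physical-layer (code rate, constellation) settings, keep the combination with the lowest estimated distortion whose bitrate fits the channel. The sketch below illustrates only that search structure; the function name `pick_parameters` and the toy bitrate model are assumptions, not the paper's actual distortion or rate models.

```python
from itertools import product

def pick_parameters(estimate_distortion, bandwidth, qps, gops, code_rates, constellations):
    """Exhaustive search over application-layer (QP, GOP size) and
    physical-layer (turbo code rate, bits per symbol) parameters,
    keeping the combination with the lowest estimated end-to-end
    distortion that fits the bandwidth budget."""
    best, best_d = None, float("inf")
    for qp, gop, rate, bits in product(qps, gops, code_rates, constellations):
        # Toy bitrate model (placeholder): coarser quantization lowers the
        # source rate; channel coding inflates it by 1/rate.
        bitrate = (5000.0 / qp) * gop * bits / rate
        if bitrate > bandwidth:
            continue  # violates the bandwidth constraint
        d = estimate_distortion(qp, gop, rate, bits)
        if d < best_d:
            best, best_d = (qp, gop, rate, bits), d
    return best, best_d
```

With a generous bandwidth the search picks the lowest-distortion settings outright; as the budget tightens, it trades constellation size and code rate against distortion, which is the essence of the framework.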
A Digital Video Recorder (DVR) is a digital video recorder with hard drive storage media. When the capacity of the hard disk runs out, the system notifies the user, and if there is no response the oldest recordings are overwritten automatically and the data are lost. The main focus of this paper is to enable recording directly connected to a computer editor. The output of both systems (DVR and Direct Recording) is compared with an objective assessment using the Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR) parameters. The results showed an average MSE of 797.8556108 for Direct Recording and 137.4346100 for the DVR, and an average PSNR of 19.5942333 dB for Direct Recording and 27.0914258 dB for the DVR. This indicates that the DVR has a much better output quality than Direct Recording.
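The objective comparison above rests on two standard formulas: MSE averages the squared pixel differences between a reference frame and a test frame, and PSNR expresses that error on a logarithmic scale against the peak pixel value. A minimal NumPy sketch (the frames here are synthetic, not the paper's data):

```python
import numpy as np

def mse(ref, test):
    """Mean Square Error between two same-sized frames."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical frames."""
    err = mse(ref, test)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

# Synthetic example: an 8-bit frame and a copy with uniform noise added
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(frame.astype(np.int16) + rng.integers(-10, 11, size=frame.shape),
                0, 255).astype(np.uint8)
print(mse(frame, noisy), psnr(frame, noisy))
```

A lower MSE and correspondingly higher PSNR, as reported for the DVR, indicate output closer to the reference.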
Machireddy, Archana; van Santen, Jan; Wilson, Jenny L; Myers, Julianne; Hadders-Algra, Mijna; Xubo Song
Cerebral palsy is a non-progressive neurological disorder occurring in early childhood that affects body movement and muscle control. Early identification can help improve outcomes through therapy-based interventions. Absence of so-called "fidgety movements" is a strong predictor of cerebral palsy. Currently, infant limb movements captured through either video cameras or accelerometers are analyzed to identify fidgety movements. However, both modalities have their limitations. Video cameras do not have the high temporal resolution needed to capture subtle movements. Accelerometers have low spatial resolution and capture only relative movement. In order to overcome these limitations, we have developed a system that combines measurements from both camera and sensors to estimate the true underlying motion using an extended Kalman filter. The estimated motion achieved 84% classification accuracy in identifying fidgety movements using a Support Vector Machine.
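The fusion idea above can be illustrated with a linear simplification of the Kalman filter: the accelerometer drives the motion model as a control input, while camera positions arrive as noisy measurements that correct the drift. This one-dimensional sketch is illustrative only; the function name `fuse_camera_accel` and all noise parameters are assumptions, not the paper's filter.

```python
import numpy as np

def fuse_camera_accel(positions, accels, dt=1 / 30, meas_var=4.0, acc_var=0.5):
    """Linear Kalman filter sketch with state [position, velocity].
    Accelerometer readings drive the prediction step; camera positions
    (None when no frame is available) correct it."""
    x = np.zeros(2)                          # state estimate
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity transition
    B = np.array([0.5 * dt * dt, dt])        # acceleration (control) input
    H = np.array([[1.0, 0.0]])               # camera observes position only
    Q = np.eye(2) * acc_var                  # process noise
    R = np.array([[meas_var]])               # camera measurement noise
    out = []
    for z, a in zip(positions, accels):
        # Predict using the accelerometer as the control input
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        if z is not None:                    # camera frame available: update
            y = z - H @ x                    # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

The extended variant in the paper applies the same predict/update cycle to a nonlinear 3D motion model via local linearization.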
This paper presents a parallel TBB-CUDA implementation for the acceleration of a single-Gaussian distribution model, which is effective for background removal in video-based fire detection systems. In this framework, TBB mainly handles the initialization of the estimated Gaussian model running on the CPU, while CUDA performs background removal and adaptation of the model running on the GPU. This implementation can exploit the combined computation power of TBB and CUDA, and can be applied in real-time environments. Over 220 video sequences are utilized in the experiments. The experimental results illustrate that TBB+CUDA can achieve a higher speedup than either TBB or CUDA alone. The proposed framework can effectively overcome the disadvantages of limited memory bandwidth and few execution units of the CPU, and it reduces data transfer latency and memory latency between CPU and GPU.
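The single-Gaussian model referred to above keeps a running mean and variance per pixel; a pixel far from its Gaussian is flagged as foreground, and background pixels update the model. A sequential NumPy sketch of that per-pixel work (the update rate `alpha` and threshold `k` are assumed values; the paper runs this logic in parallel on the GPU):

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian background model: pixels more than k standard
    deviations from the per-pixel mean are foreground; background
    pixels update the running mean and variance exponentially."""
    frame = frame.astype(np.float64)
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    bg = ~foreground
    # Exponential running update, applied to background pixels only
    mean[bg] = (1 - alpha) * mean[bg] + alpha * frame[bg]
    var[bg] = (1 - alpha) * var[bg] + alpha * (frame[bg] - mean[bg]) ** 2
    return foreground, mean, var
```

Because every pixel is updated independently, the loop body maps naturally onto one GPU thread per pixel, which is what makes the CUDA offload effective.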
Wickramasuriya, Jehan; Alhazzazi, Mohanned; Datt, Mahesh; Mehrotra, Sharad; Venkatasubramanian, Nalini
Forms of surveillance are very quickly becoming an integral part of crime control policy, crisis management, social control theory and community consciousness. In turn, it has been used as a simple and effective solution to many of these problems. However, privacy-related concerns have been expressed over the development and deployment of this technology. Used properly, video cameras help expose wrongdoing but typically come at the cost of privacy to those not involved in any maleficent activity. This work describes the design and implementation of a real-time, privacy-protecting video surveillance infrastructure that fuses additional sensor information (e.g. Radio-frequency Identification) with video streams and an access control framework in order to make decisions about how and when to display the individuals under surveillance. This video surveillance system is a particular instance of a more general paradigm of privacy-protecting data collection. In this paper we describe in detail the video processing techniques used in order to achieve real-time tracking of users in pervasive spaces while utilizing the additional sensor data provided by various instrumented sensors. In particular, we discuss background modeling techniques, object tracking and implementation techniques that pertain to the overall development of this system.
Human noroviruses (HuNoVs) are the dominant cause of food-borne outbreaks of acute gastroenteritis. However, fundamental research on HuNoVs, such as identification of viral receptors, has been limited by the currently immature system for culturing HuNoVs and the lack of efficient small animal models. Previously, we demonstrated that the recombinant protruding (P) domain of the HuNoV capsid protein was successfully anchored on the surface of Escherichia coli BL21 cells after the bacteria were transformed with a plasmid expressing the HuNoV P protein fused with a bacterial transmembrane anchor protein. The cell-surface-displayed P proteins could specifically recognize and bind to histo-blood group antigens (HBGAs), receptors of HuNoVs. In this study, an upgraded bacterial surface display system was developed as a new platform to discover candidate receptors of HuNoVs. A thrombin-susceptible "linker" sequence was added between the sequences of the bacterial transmembrane anchor protein and the P domain of the HuNoV (GII.4) capsid protein in a plasmid that displays the functional P proteins on the surface of bacteria. In this new system, the surface-displayed HuNoV P proteins could be released by thrombin treatment. The released P proteins self-assembled into small particles, which were visualized by electron microscopy. The bacteria with the surface-displayed P proteins were incubated with pig stomach mucin, which contains HBGAs. The bacteria-HuNoV P protein-HBGA complex could be collected by low-speed centrifugation. The HuNoV P protein-HBGA complex was then separated from the recombinant bacterial surface by thrombin treatment. The released viral receptor was confirmed by using a monoclonal antibody against type A HBGA. This demonstrated that the new system was able to capture and easily isolate receptors of HuNoVs. This new strategy provides an alternative, easier approach for isolating unknown receptors/ligands of HuNoVs from different samples
Levin, E.; Sergeyev, A.
In this paper we describe multidisciplinary experimental research concentrated on stereoscopic presentation of geospatial imagery data obtained from various sensors. The source data differed in scale, texture, geometry and content, and no single image processing technique allows such data to be processed simultaneously. However, an augmented reality system allows subjects to fuse multi-sensor, multi-temporal data and terrain reality into a single model. An augmented reality experimental setup, based on a head-mounted display, was designed to efficiently superimpose LIDAR point clouds for comfortable stereoscopic perception. A practical research experiment indicates the feasibility of stereoscopic perception of data obtained on the fly. One of the most interesting findings is that the source LIDAR point clouds did not have to be preprocessed or enhanced for the experiments described.
Betancur, J. Alejandro; Osorio, Gilberto; Mejía, Alejandro
Throughout the development of the automotive industry, supporting activities related to driving has been a subject of analysis and experimentation, always seeking new ways to achieve greater safety for the driver and passengers. In order to contribute to this topic, this paper summarizes, from past research experiences, the use of Head-Up Display systems applied to the automobile industry, covering two main points of discussion: the first, a technical point of view, in which the main principles of optical design associated with a moderate-cost experimental setup are brought out; and the second, an operational approach, in which an applied driving graphical interface is presented. Up to now, the results suggest that the experimental setup discussed here could be adapted to any automobile, but further research and investment are needed.
Rhee, Taehyun; Petikam, Lohit; Allen, Benjamin; Chalmers, Andrew
This paper presents a novel immersive system called MR360 that provides interactive mixed reality (MR) experiences using a conventional low dynamic range (LDR) 360° panoramic video (360-video) shown in head mounted displays (HMDs). MR360 seamlessly composites 3D virtual objects into a live 360-video using the input panoramic video as the lighting source to illuminate the virtual objects. Image based lighting (IBL) is perceptually optimized to provide fast and believable results using the LDR 360-video as the lighting source. Regions of most salient lights in the input panoramic video are detected to optimize the number of lights used to cast perceptible shadows. Then, the areas of the detected lights adjust the penumbra of the shadow to provide realistic soft shadows. Finally, our real-time differential rendering synthesizes illumination of the virtual 3D objects into the 360-video. MR360 provides the illusion of interacting with objects in a video, which are actually 3D virtual objects seamlessly composited into the background of the 360-video. MR360 was implemented in a commercial game engine and tested using various 360-videos. Since our MR360 pipeline does not require any pre-computation, it can synthesize an interactive MR scene using a live 360-video stream while providing realistic high performance rendering suitable for HMDs.
Glover, R. D.
The NASA Dryden Flight Research Facility has developed a microprocessor-based, user-programmable, general-purpose aircraft interrogation and display system (AIDS). The hardware and software of this ground-support equipment have been designed to permit diverse applications in support of aircraft digital flight-control systems and simulation facilities. AIDS is often employed to provide engineering-units display of internal digital system parameters during development and qualification testing. Such visibility into the system under test has proved to be a key element in the final qualification testing of aircraft digital flight-control systems. Three first-generation 8-bit units are now in service in support of several research aircraft projects, and user acceptance has been high. A second-generation design, extended AIDS (XAIDS), incorporating multiple 16-bit processors, is now being developed to support the forward swept wing aircraft project (X-29A). This paper outlines the AIDS concept, summarizes AIDS operational experience, and describes the planned XAIDS design and mechanization.
Momcilovic, Svetislav; Sousa, Leonel
In this work, scalable parallelization methods for real-time H.264/AVC video coding on multi-core platforms, such as recent Graphics Processing Units (GPUs) and the Cell Broadband Engine (Cell/BE), are proposed. By applying Amdahl's law, the most demanding parts of the video coder were identified, and the Single Program Multiple Data and Single Instruction Multiple Data approaches were adopted for achieving real-time processing. In particular, video motion estimation and in-loop deblocking filtering were offloaded to be executed in parallel on either GPUs or Cell/BE Synergistic Processor Elements (SPEs). The limits and advantages of these two architectures when dealing with typical video coding problems, such as data dependencies and large input data, are demonstrated. We propose techniques to minimize the impact of branch divergence and branch misprediction, data misalignment, conflicts and non-coalesced memory accesses. Moreover, data dependencies and memory size restrictions are taken into account in order to minimize synchronization and communication time overheads, and to achieve the optimal workload balance given the available cores. A data reuse technique is extensively applied to reduce communication overhead and achieve the maximum processing speedup. Experimental results show that real-time H.264/AVC is achieved on both systems by computing 30 frames per second at a resolution of 720×576 pixels, when full-pixel motion estimation is applied over 5 reference frames and a 32×32 search area. When quarter-pixel motion estimation is adopted, real-time video coding is obtained on the GPU for larger search areas and on the Cell/BE for smaller search areas.
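The Amdahl's-law profiling step mentioned above gives a hard upper bound on the speedup attainable by parallelizing only part of the coder. A minimal sketch (the 80% parallel fraction below is a made-up illustration, not the paper's measured profile):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Upper bound on overall speedup when only parallel_fraction of
    the runtime benefits from n_cores (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Hypothetical profile: motion estimation + deblocking = 80% of runtime.
# No matter how many cores, speedup is bounded by 1 / 0.2 = 5x.
print(amdahl_speedup(0.8, 6))
```

This is why the paper targets the most demanding stages first: offloading a stage that is only a small fraction of the runtime cannot yield a large overall speedup.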
Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, by ignoring the importance of the integration of the user interface at the information process level, the result can be sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that the designers understand how each user's processing characteristics affect how the user gathers information, and how the user communicates the information to the designer and other users. A different type of approach to achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP, and its use in expanding design choices from the user's "model of the world," in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more.
Lee, Chang-Kun; Lee, Taewon; Sung, Hyunsik; Min, Sung-Wook
A design method for a wedge projection display system based on ray retracing is proposed. To analyze the principle of image formation on the inclined surface of the wedge-shaped waveguide, a bundle of rays is retraced from an imaging point on the inclined surface to the aperture of the waveguide. As a consequence of ray retracing, we obtain the incident conditions of the rays, such as the position and the angle at the aperture, which provide clues to image formation. To illuminate the image formation, the concept of the equivalent imaging point is proposed, which is the intersection where the incident rays are extended over the space regardless of the refraction and reflection in the waveguide. Since the initial value of the rays arriving at the equivalent imaging point corresponds to that of the rays converging onto the imaging point on the inclined surface, the image formation can be visualized by calculating the equivalent imaging point over the entire inclined surface. We can then find image characteristics, such as size, position, and degree of blur, by analyzing the distribution of the equivalent imaging point, and design an optimized wedge projection system by attaching a prism structure at the aperture. The simulation results show the feasibility of the ray retracing analysis and characterize the numerical relation between the waveguide parameters and the aperture structure for an on-axis configuration. The experimental results verify the designed system based on the proposed method.
Higuchi, Kazuhito; Ishii, Ken'ichiro; Ishikawa, Jun; Hiyama, Shigeo
Holographic movies can be seen as a tool to estimate the picture quality of moving holographic images as a step towards holographic television. The authors have previously developed three versions of an experimental holographic movie system, and this paper is a report on an improved version 4 of the system. The new version features a newly-developed projection-type display with a retro-directive beaded-screen, and an automatic film driver unit which moves perforated 35 mm holographic film intermittently with a shutter. A twin diamond-shaped hologram format, which was developed in the earlier version 2, is adopted for the films. The films comprise a series of reconstructed moving holographic images with minimal blurring. The optical arrangement and structure of the version 4 system enable the viewers to watch the film images in an open space, which in turn relieves them of the psychological pressure they felt with the previous three versions, when they had to squint into a narrow window built into a wall on the side of the device.
The growth of common as well as emerging visual display technologies are surveyed. The major inference is that contemporary society is rapidly growing evermore reliant on visual display for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly and accurately.
Analytical display design for flight tasks conducted under instrument meteorological conditions. [human factors engineering of pilot performance for display device design in instrument landing systems
Hess, R. A.
Paramount to proper utilization of electronic displays is a method for determining pilot-centered display requirements. Display design should be viewed fundamentally as a guidance and control problem which has interactions with the designer's knowledge of human psychomotor activity. From this standpoint, reliable analytical models of human pilots as information processors and controllers can provide valuable insight into the display design process. A relatively straightforward, nearly algorithmic procedure for deriving model-based, pilot-centered display requirements was developed and is presented. The optimal or control theoretic pilot model serves as the backbone of the design methodology, which is specifically directed toward the synthesis of head-down, electronic, cockpit display formats. Some novel applications of the optimal pilot model are discussed. An analytical design example is offered which defines a format for the electronic display to be used in a UH-1H helicopter in a landing approach task involving longitudinal and lateral degrees of freedom.
Fabian, E; Mertz, M; Hofmann, H; Wertheimer, R; Foos, C
The clinical advantages of a scanning laser ophthalmoscope (SLO) and video imaging of fundus pictures are described. Image quality (contrast, depth of field) and imaging possibilities (confocal stop) are assessed. Imaging with different lasers (argon, He-Ne) and changes in imaging rendered possible by confocal alignment of the imaging optics are discussed. Hard copies from video images are still of inferior quality compared to fundus photographs. Methods of direct processing and retrieval of digitally stored SLO video fundus images are illustrated by examples. Modifications for a definitive laser scanning system - in regard to the field of view and the quality of hard copies - are proposed.
Han, Jian; Liu, Juan; Yao, Xincheng; Wang, Yongtian
A compact waveguide display system integrating freeform elements and volume holograms is presented here for the first time. The use of freeform elements can broaden the field of view, which otherwise limits the applications of a holographic waveguide. An optimized system can achieve a diagonal field of view of 45° when the planar waveguide is 3 mm thick. The freeform-element in-coupler and the volume-hologram out-coupler were designed in detail in our study, and the influence of grating configurations on diffraction efficiency was analyzed thoroughly. The off-axis aberrations were well compensated by the in-coupler, and the diffraction efficiency of the optimized waveguide display system could reach 87.57%. With the integrated design, stability and reliability of this monochromatic display system were achieved, and the alignment of the system was easily controlled by the recording of the volume holograms, which makes mass production possible.
Ellwood, Sutherland C., Jr.
Photonica, Inc. has pioneered the use of magneto-optics and hybrid technologies in visual display systems to create arrays addressing high-speed, solid-state modulators up to 1K times faster than DMD/DLP, yielding high frame rates and extremely high net native resolution, allowing for full duplication of right-eye and left-eye modulators at 1080p, DCI 2K, 4K and other specified resolution requirements. The technology enables high transmission (brightness) per frame. In one version, each integrated image-engine assembly processes binocular frames simultaneously, employing simultaneous right-eye/left-eye channels, either polarization-based or "Infitec" color-band-based channels, as well as pixel-vector-based systems. In another version, a multi-chip, massively parallel signal-processing architecture integrates pixel-signal channels to yield simultaneous binocular frames. This may be combined with on-chip integration. Channels are integrated either through optics elements on-chip, through a fiber network, or both.
Schneider, Jeffrey C; Ozsecen, Muzaffer Y; Muraoka, Nicholas K; Mancinelli, Chiara; Della Croce, Ugo; Ryan, Colleen M; Bonato, Paolo
Burn contractures are common and difficult to treat. Measuring continuous joint motion would inform the assessment of contracture interventions; however, it is not standard clinical practice. This study examines the use of an interactive gaming system to measure continuous joint motion data, and aims to assess the usability of an exoskeleton-based interactive gaming system in the rehabilitation of upper extremity burn contractures. Feasibility study. Eight subjects with a history of burn injury and upper extremity contractures were recruited from the outpatient clinic of a regional inpatient rehabilitation facility. Subjects used an exoskeleton-based interactive gaming system to play 4 different video games. Continuous joint motion data were collected at the shoulder and elbow during game play. Outcomes: visual analog scale for engagement, difficulty, and comfort; angular range of motion by subject, joint, and game. The study population had a mean age of 43 ± 16 years (mean ± standard deviation), with burns covering 10%-90% of total body surface area. Subjects reported satisfactory levels of enjoyment, comfort, and difficulty. Continuous joint motion data demonstrated variable characteristics by subject, plane of motion, and game. This study demonstrates the feasibility of using an exoskeleton-based interactive gaming system in the burn population. Future studies are needed that examine the efficacy of tailoring interactive video games to the specific joint impairments of burn survivors. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
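The angular range of motion reported per subject, joint, and game reduces, at its simplest, to the span of the recorded joint-angle series; a trivial sketch with hypothetical game-play samples (not the study's data):

```python
def range_of_motion(angles_deg):
    """Angular range of motion: span of a continuous joint-angle recording."""
    return max(angles_deg) - min(angles_deg)

# hypothetical elbow-flexion samples (degrees) captured during game play
elbow = [12.0, 30.5, 55.2, 71.8, 64.0, 40.1, 18.3]
print(range_of_motion(elbow))  # span in degrees
```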
Roger W Li
Full Text Available UNLABELLED: Abnormal visual experience during a sensitive period of development disrupts neuronal circuitry in the visual cortex and results in abnormal spatial vision or amblyopia. Here we examined whether playing video games can induce plasticity in the visual system of adults with amblyopia. Specifically 20 adults with amblyopia (age 15-61 y; visual acuity: 20/25-20/480, with no manifest ocular disease or nystagmus were recruited and allocated into three intervention groups: action videogame group (n = 10, non-action videogame group (n = 3, and crossover control group (n = 7. Our experiments show that playing video games (both action and non-action games for a short period of time (40-80 h, 2 h/d using the amblyopic eye results in a substantial improvement in a wide range of fundamental visual functions, from low-level to high-level, including visual acuity (33%, positional acuity (16%, spatial attention (37%, and stereopsis (54%. Using a cross-over experimental design (first 20 h: occlusion therapy, and the next 40 h: videogame therapy, we can conclude that the improvement cannot be explained simply by eye patching alone. We quantified the limits and the time course of visual plasticity induced by video-game experience. The recovery in visual acuity that we observed is at least 5-fold faster than would be expected from occlusion therapy in childhood amblyopia. We used positional noise and modelling to reveal the neural mechanisms underlying the visual improvements in terms of decreased spatial distortion (7% and increased processing efficiency (33%. Our study had several limitations: small sample size, lack of randomization, and differences in numbers between groups. A large-scale randomized clinical study is needed to confirm the therapeutic value of video-game treatment in clinical situations. Nonetheless, taken as a pilot study, this work suggests that video-game play may provide important principles for treating amblyopia
Joongheon Kim; Eun-Seok Ryu
This paper presents the quality analysis results of high-definition video streaming in two-tiered camera sensor network applications. In the camera-sensing system, multiple cameras sense visual scenes in their target fields and transmit the video streams via IEEE 802.15.3c multigigabit wireless links. However, the wireless transmission introduces interferences to the other links. This paper analyzes the capacity degradation due to the interference impacts from the camera-sensing nodes to the ...
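The capacity degradation described above can be illustrated with the Shannon capacity under interference; the bandwidth and power levels below are illustrative assumptions, not values from the paper:

```python
import math

def capacity_bps(bandwidth_hz, signal_mw, noise_mw, interference_mw):
    """Shannon capacity under interference: C = B * log2(1 + SINR)."""
    sinr = signal_mw / (noise_mw + interference_mw)
    return bandwidth_hz * math.log2(1 + sinr)

# 2.16 GHz channel width (typical of IEEE 802.15.3c); powers are made up
clean = capacity_bps(2.16e9, signal_mw=10.0, noise_mw=0.1, interference_mw=0.0)
interfered = capacity_bps(2.16e9, signal_mw=10.0, noise_mw=0.1, interference_mw=1.0)
print(clean > interfered)  # interference from neighboring links lowers capacity
```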
Rui Sergio Monteiro de Barros
The right femoral vessels of 80 rats were identified and dissected. External lengths and diameters of femoral arteries and femoral veins were measured using either a microscope or a video magnification system. Findings were correlated to the animals' weights. Mean length was 14.33 mm for both femoral arteries and femoral veins; mean diameter was 0.65 mm for arteries and 0.81 mm for veins. In our sample, rats' body weights were only correlated with the diameter of their femoral veins.
Lopes, M. L. [Fermilab
SolCalc is a software suite that computes and displays magnetic fields generated by a three-dimensional (3D) solenoid system. Examples of such systems are the Mu2e magnet system and helical solenoids for muon cooling systems. SolCalc was originally coded in Matlab, and later upgraded to a compiled version (called MEX) to improve solving speed. Matlab was chosen because its graphical capabilities are an attractive feature over other computer languages. Solenoid geometries can be created using any text editor or spreadsheet and can be displayed dynamically in 3D. Fields are computed from any given list of coordinates. The field distribution on the surfaces of the coils can be displayed as well. SolCalc was benchmarked against well-known commercial software for speed and accuracy, and the results compared favorably.
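SolCalc's own algorithms are not given here, but the underlying physics can be sketched by superposing the on-axis Biot-Savart field of the circular current loops that approximate a solenoid winding. A minimal Python sketch with invented coil parameters (not Mu2e values):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def loop_axial_field(current_a, radius_m, z_m):
    """On-axis field of a single circular current loop (Biot-Savart result)."""
    return MU0 * current_a * radius_m**2 / (2 * (radius_m**2 + z_m**2) ** 1.5)

def solenoid_axial_field(current_a, radius_m, loop_z_positions, z_m):
    """Superpose the loops that approximate a solenoid winding."""
    return sum(loop_axial_field(current_a, radius_m, z_m - zc)
               for zc in loop_z_positions)

# hypothetical coil: 100 turns over 0.1 m, radius 0.05 m, carrying 10 A
loops = [i * 0.001 for i in range(100)]
b_center = solenoid_axial_field(10.0, 0.05, loops, z_m=0.05)
print(b_center)  # tesla, close to the finite-solenoid analytic value
```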
Krüger, Andreas; Edelmann-Nusser, Jürgen
This study aims at determining the accuracy of a full-body inertial measurement system in a real skiing environment in comparison with an optical video-based system. Recent studies have shown the use of inertial measurement systems for the determination of kinematic parameters in alpine skiing. However, a quantitative validation of a full-body inertial measurement system for application in alpine skiing has so far not been available. For the purpose of this study, a skier performed a test run equipped with a full-body inertial measurement system in combination with a DGPS. In addition, one turn of the test run was analyzed by an optical video-based system. With respect to the analyzed angles, a maximum mean difference of 4.9° was measured. No differences in the measured angles were found between the inertial measurement system alone and its combined use with the DGPS. For determining the skier's trajectory, an additional system (e.g., DGPS) must be used. As opposed to optical methods, the main advantages of the inertial measurement system are the determination of kinematic parameters without the limitation of a restricted capture volume, and the low time cost of measurement preparation and data analysis.
Khalid, Md. Saifuddin; Hossan, Md. Iqbal
The integration of video conferencing systems (VCS) has increased significantly in the classrooms and administrative practices of higher education institutions. The VCSs discussed in the existing literature can be broadly categorized as desktop systems (e.g. Scopia), WebRTC or Real-Time Communications systems (e.g. Google Hangout, Adobe Connect, Cisco WebEx, and appear.in), and dedicated systems (e.g. Polycom). There is a lack of empirical study on usability evaluation of these interactive systems in educational contexts. This study identifies usability errors and measures user satisfaction with a dedicated VCS. Software Usability Measurement Inventory (SUMI) analysis of 12 user responses results in a below-average score. A post-study system test by the vendor identified cabling and setup errors. Applying SUMI followed by qualitative methods might enrich evaluation outcomes.
A major learning difficulty of Japanese foreign language (JFL) learners is the complex composition of two syllabaries, hiragana and katakana, and kanji characters adopted from logographic Chinese ones. As the number of Japanese language learners increases, computer-assisted Japanese language education gradually gains more attention. This study aimed to adopt a Japanese word segmentation system to help JFL learners overcome literacy problems. This study adopted MeCab, a Japanese morphological analyzer and part-of-speech (POS) tagger, to segment Japanese texts into separate morphemes by adding spaces and to attach POS tags to each morpheme for beginners. The participants took part in three experimental activities involving watching two Japanese videos with general or segmented Japanese captions, and completed the Nielsen's Attributes of Usability (NAU) survey and the After Scenario Questionnaire (ASQ) to evaluate the usability of the learning activities. The results of the system evaluation showed that the videos with segmented captions could increase the participants' learning motivation and willingness to adopt the word segmentation system to learn Japanese.
Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko
The authors propose a system for creating high-quality, small-file-size lecture videos for distance e-learning. Examining the features of the lecturing scene, the authors employ two kinds of image-capturing equipment with complementary characteristics: a digital video camera with low resolution and a high frame rate, and a digital still camera with high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment and integrating them with image processing, the system produces course materials with greatly reduced file size: the course materials satisfy the requirements both for the temporal resolution needed to see the lecturer's pointing actions and for the high spatial resolution needed to read small written letters. In a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.
Slezak, T.; Wagner, M.; Yeh, Mimi; Ashworth, L.; Nelson, D.; Ow, D.; Branscomb, E.; Carrano, A.
Efforts are underway at numerous sites around the world to construct physical maps of all human chromosomes. These maps will enable researchers to locate, characterize, and eventually understand the genes that control human structure and function. Accomplishing this goal will require a staggering amount of innovation and advancement of biological technology. The volume and complexity of the data already generated requires a sophisticated array of computational support to collect, store, analyze, integrate, and display it in biologically meaningful ways. The Human Genome Center at Livermore has spent the last 6 years constructing a database system to support its physical mapping efforts on human chromosome 19. Our computational support team is composed of experienced computer professionals who share a common pragmatic primary goal of rapidly supplying tools that meet the ever-changing needs of the biologists. Most papers describing computational support of genome research concentrate on mathematical details of key algorithms. However, in this paper we would like to concentrate on the design issues, tradeoffs, and consequences from the point of view of building a complex database system to support leading-edge genomic research. We introduce the topic of physical mapping, discuss the key design issues involved in our databases, and discuss the use of this data by our major tools (DNA fingerprint analysis and overlap computation, contig assembly, map integration, and database browsing.) Given the advantage of hindsight, we discuss what worked, what didn't, and how we will evolve from here. As early pioneers in this field we hope that our experience may prove useful to others who are now beginning to design and construct similar systems.
Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas
This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE1 neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE)1 neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe2 dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.
Bruellmann, D D; Tjaden, H; Schwanecke, U; Barth, P
We propose an augmented reality system for the reliable detection of root canals in video sequences based on k-nearest neighbor color classification, and introduce a simple geometric criterion for teeth. The software was implemented using C++, Qt, and the image processing library OpenCV. Teeth are detected in video images to restrict the segmentation of the root canal orifices by using a k-nearest neighbor algorithm. The locations of the root canal orifices were determined using Euclidean distance-based image segmentation. A set of 126 human teeth with known and verified locations of the root canal orifices was used for evaluation. The software detects root canal orifices for automatic classification of the teeth in video images and stores the location and size of the found structures. Overall, 287 of 305 root canals were correctly detected, for an overall sensitivity of about 94%. Classification accuracy ranged from 65.0 to 81.2% for molars and from 85.7 to 96.7% for premolars. The realized software shows that observations made in anatomical studies can be exploited to automate real-time detection of root canal orifices and tooth classification. Automatic storage of the location, size, and orientation of the found structures can be used for future anatomical studies. Thus, statistical tables with canal locations can be derived, which can improve anatomical knowledge of the teeth and alleviate root canal detection in the future. For this purpose the software is freely available at: http://www.dental-imaging.zahnmedizin.uni-mainz.de/.
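The k-nearest-neighbor color classification at the core of the detection step can be sketched as a majority vote over nearby RGB training samples; the training colors and labels below are invented for illustration and bear no relation to the paper's data:

```python
from collections import Counter

def knn_classify(samples, labels, pixel, k=3):
    """Classify an RGB pixel by majority vote among its k nearest
    training samples (squared Euclidean distance in RGB space)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(s, pixel)), lab)
        for s, lab in zip(samples, labels)
    )
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

# toy training set: dark pixels ~ root canal orifice, bright pixels ~ dentin
samples = [(20, 15, 10), (30, 25, 20), (25, 20, 15),
           (200, 190, 170), (220, 210, 190), (210, 200, 180)]
labels = ["canal", "canal", "canal", "dentin", "dentin", "dentin"]
print(knn_classify(samples, labels, (28, 22, 18)))  # -> "canal"
```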
Mathiak, Krystyna A; Klasen, Martin; Weber, René; Ackermann, Hermann; Shergill, Sukhwinder S; Mathiak, Klaus
Background: Violent content in video games evokes many concerns but there is little research concerning its rewarding aspects. It was demonstrated that playing a video game leads to striatal dopamine release. It is unclear, however, which aspects of the game cause this reward system activation and if violent content contributes to it. We combined functional Magnetic Resonance Imaging (fMRI) with individual affect measures to address the neuronal correlates of violence in a video game. Results: Thirteen male German volunteers played a first-person shooter game (Tactical Ops: Assault on Terror) during fMRI measurement. We defined success as eliminating opponents, and failure as being eliminated themselves. Affect was measured directly before and after game play using the Positive and Negative Affect Schedule (PANAS). Failure and success events evoked increased activity in visual cortex but only failure decreased activity in orbitofrontal cortex and caudate nucleus. A negative correlation between negative affect and responses to failure was evident in the right temporal pole (rTP). Conclusions: The deactivation of the caudate nucleus during failure is in accordance with its role in reward-prediction error: it occurred whenever subjects missed an expected reward (being eliminated rather than eliminating the opponent). We found no indication that violence events were directly rewarding for the players. We addressed subjective evaluations of affect change due to gameplay to study the reward system. Subjects reporting greater negative affect after playing the game had less rTP activity associated with failure. The rTP may therefore be involved in evaluating the failure events in a social context, to regulate the players' mood. PMID:21749711
For years, both hardware and software engineers have struggled to acquire device information quickly and flexibly: numerous devices cannot have their status tested promptly because of the time spent travelling to a computer terminal. For instance, to test a scintillator's status, one has to inject beam into the device and quickly return to a terminal to see the results. This is not only time-consuming but extremely inconvenient for the person responsible, consuming time that would be better spent on more pressing matters. Hence the proposal to create an interface offering a stable, flexible, user-friendly, and data-driven solution to this problem. As the basis of the most common operating system for mobile devices, the Android API proved the most cost-efficient choice, since it is built on open-source software, and the least difficult to implement, since its back-end development resides in Java calls and XML for visual representation...
This paper presents a reconfigurable multi-sensor interface and its readout integrated circuit (ROIC) for display-based multi-sensor systems, which build up multi-sensor functions by utilizing touch screen panels. In addition to inherent touch detection, physiological and environmental sensor interfaces are incorporated. The reconfigurable feature is implemented by proposing two basic readout topologies, amplifier-based and oscillator-based circuits. For a noise-immune design against the various noises arising from inherent human-touch operation, an alternate-sampling error-correction scheme is proposed and integrated inside the ROIC, achieving 12-bit successive-approximation-register (SAR) analog-to-digital conversion without additional calibration. A ROIC prototype that includes all of the proposed functions and data converters was fabricated in a 0.18 μm complementary metal oxide semiconductor (CMOS) process, and its feasibility was experimentally verified to support multiple heterogeneous sensing functions of touch, electrocardiogram, body impedance, and environmental sensors.
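The successive-approximation conversion mentioned above resolves one bit per comparison step. A behavioral sketch of an idealized 12-bit SAR ADC (ignoring circuit noise and the paper's error-correction scheme; the reference voltage is an assumption):

```python
def sar_adc(vin, vref=1.8, bits=12):
    """Successive-approximation ADC: resolve one bit per step by
    comparing the input against a binary-weighted trial voltage."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)           # tentatively set the next bit
        if vin >= trial * vref / (1 << bits):
            code = trial                   # keep the bit if input is above trial
    return code

print(sar_adc(0.9))  # mid-scale input resolves to half of the 4096-code range
```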
Three-dimensional (3D) display has become an indispensable feature of commercial TVs in recent years. However, the 3D content shown on such a display may contain abrupt changes of depth when the scene changes, which can be considered a paranormal stimulus. Because the human visual system is not accustomed to such paranormal stimuli under natural conditions, they can cause unexpected responses that usually induce discomfort. Following a change of depth expressed by the 3D display, the eyeballs rotate to match convergence to the new 3D image position. The amount of rotation varies according to the initial longitudinal location and the depth displacement of the 3D image. Because the change of depth is abrupt, there is a delay in the human visual system's response to it, and this delay can be a source of discomfort. To guarantee safety in watching 3D TV, the acceptable level of displacement in the longitudinal direction should be quantified. Additionally, artificially generated scenes can also provide paranormal stimuli such as periodic depth variations. In this presentation, we investigate the response of the human visual system to such paranormal stimuli presented by a 3D display system. Using the results of this investigation, we can give guidelines for creating 3D content that minimizes the discomfort arising from paranormal stimuli.
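The eye rotation demanded by a depth jump can be estimated from simple geometry: the convergence angle for a point at viewing depth d with interpupillary distance IPD is 2·atan(IPD / 2d). A sketch with typical assumed values (63 mm IPD; depths are illustrative):

```python
import math

def vergence_deg(depth_m, ipd_m=0.063):
    """Binocular convergence angle (degrees) for a point at the given depth."""
    return math.degrees(2 * math.atan(ipd_m / (2 * depth_m)))

# an abrupt jump of displayed depth from 3 m to 0.5 m demands this much
# extra convergence at once
far, near = vergence_deg(3.0), vergence_deg(0.5)
print(round(near - far, 2))  # degrees of sudden eye rotation
```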
Non-intrusive video imaging sensors are commonly used in traffic monitoring and surveillance. For some applications it is necessary to transmit the video data over communication links. However, due to increased requirements of bitrate this mean...
Yaser Mohammad Taheri; Alireza Zolghadr–asli; Mehran Yazdi
Video watermarking is usually considered as watermarking of a set of still images. In a frame-by-frame watermarking approach, each video frame is treated as a single watermarked image, so collusion attacks are especially critical in video watermarking. If the same or a redundant watermark is embedded in every frame of a video, the watermark can be estimated and then removed by a watermark estimation remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks c...
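The estimation step of a WER attack exploits the fact that averaging many frames suppresses the uncorrelated frame content while a watermark that is identical in every frame survives. A toy demonstration with synthetic frames (the ±1 pattern and noise statistics are invented for illustration):

```python
import random

def estimate_watermark(frames):
    """Average many frames: uncorrelated content averages toward its mean
    while a watermark repeated in every frame survives the averaging."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

random.seed(0)
watermark = [1.0 if i % 2 else -1.0 for i in range(16)]  # toy +/-1 pattern
# synthetic "frames": Gaussian content around gray level 128 plus the watermark
frames = [[random.gauss(128, 20) + w for w in watermark] for _ in range(20000)]
estimate = [v - 128 for v in estimate_watermark(frames)]
matches = sum(1 for e, w in zip(estimate, watermark) if (e > 0) == (w > 0))
print(matches)  # nearly all 16 signs of the embedded pattern are recovered
```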
Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin
Recent scientific and technological advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES). We created a real-time training module that provides standardized step-by-step guidance for robotic distal subtotal gastrectomy with D2 lymphadenectomy. Short video clips of the 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on da Vinci Surgical Systems (Intuitive Surgical, Sunnyvale, CA). We successfully performed a robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method, without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.
Crampton, Michael C
This study relates to the development of an alkaliphilic, thermotolerant, Gram-positive isolate, Bacillus halodurans Alk36, for the over-production and surface display of chimeric gene products. This bacterium continuously over-produces flagellin...
Nelson, Douglas A; Samosky, Joseph T
Safe and successful performance of medical procedures often requires the correct manual positioning of a tool. For example, during endotracheal intubation a laryngoscope is used to open a passage in the airway through which a breathing tube is inserted. During training it can be challenging for an experienced practitioner to effectively communicate to a novice the correct placement and orientation of a tool. We have implemented a real-time tracking and position display system to enhance learning correct laryngoscope placement. The system displays a 3D model of the laryngoscope. A clinical teacher can correctly position the laryngoscope to open the airway of a full-body simulator, then set this tool pose as the target position. The system displays to the learner the fixed, target pose and a real-time display of the current, "live" laryngoscope position. Positional error metrics are displayed as color-coded visual cues to guide the user toward successful targeting of the reference position. This technique provides quantitative assessment of the degree to which a learner has matched a specified "expert" position with a tool, and is potentially applicable to a wide variety of tools and procedures.
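The color-coded error cue described above can be sketched as a simple mapping from positional error to a traffic-light color. The distance thresholds and coordinates below are invented for illustration, not values from the system:

```python
import math

def pose_error(current, target):
    """Euclidean positional error between tool tip positions given as (x, y, z) in mm."""
    return math.dist(current, target)

def error_color(err_mm, good=5.0, close=15.0):
    """Map positional error to a traffic-light cue guiding the learner."""
    if err_mm <= good:
        return "green"   # within tolerance of the expert's reference pose
    if err_mm <= close:
        return "yellow"  # close, keep adjusting
    return "red"         # far from the target pose

print(error_color(pose_error((102.0, 48.0, 30.0), (100.0, 50.0, 30.0))))  # small offset
```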
Takahata, Minoru; Uemori, Akira; Nakano, Hirotaka
This video-on-demand service is built from distributed servers, including video servers that supply real-time MPEG-1 video and audio, real-time MPEG-1 encoders, and an application server that supplies additional text information and agents for retrieval. The system has three distinctive features that enable it to provide multi-viewpoint access to real-time visual information: (1) the terminal application uses an agent-oriented approach that allows the system to be easily extended; the agents are implemented using a commercial authoring tool plus additional objects that communicate with the video servers using TCP/IP protocols; (2) the application server manages the agents, automatically processes text information, and is able to handle unexpected alterations of the contents; (3) the distributed system has an economical, flexible architecture for storing long video streams. The real-time MPEG-1 encoder system is based on multi-channel phase-shifting processing. We also describe a practical application of this system, a prototype TV-on-demand service called TVOD, which provides access to the previous week's broadcast television programs.
Cox, Malcolm E.; James, Allan; Hawke, Amy; Raiber, Matthias
Valley, and the Surat Basin, a large sedimentary basin of confined artesian aquifers. This latter example required more detail in the hydrostratigraphy, correlation of formations with drillholes, and visualisation of simulated piezometric surfaces. Both alluvial-system GVS models were developed during drought conditions to support government strategies to implement groundwater management. The Surat Basin model was industry-sponsored research, for coal seam gas groundwater management and community information and consultation. The "virtual" groundwater systems in these 3D GVS models can be interactively interrogated through standard functions, plus production of 2D cross-sections, data selection from the 3D scene, back-end database access, and plot displays. A unique feature is that GVS allows investigation of time-series data across different display modes, both 2D and 3D. GVS has been used successfully as a tool to enhance community/stakeholder understanding and knowledge of groundwater systems and is of value for training and educational purposes. Completed projects confirm that GVS provides powerful support to management and decision making, and serves as a tool for interpreting the hydrological processes of groundwater systems. A highly effective visualisation output is the production of short videos (e.g. 2-5 min) based on sequences of camera 'fly-throughs' and screen images. Further work involves developing support for multi-screen displays and touch-screen technologies, distributed rendering, and gestural interaction systems. To highlight the visualisation and animation capability of the GVS software, links to related multimedia hosted on online sites are included in the references.
Aims: The aims of this study were (1) to investigate the influence of physical movement on near-infrared spectroscopy (NIRS) data, (2) to establish a video-NIRS system which simultaneously records NIRS data and the subject's movement, and (3) to measure the oxygenated hemoglobin (oxy-Hb) concentration change (Δoxy-Hb) during a word fluency (WF) task. Experiment 1: In 5 healthy volunteers, we measured the oxy-Hb and deoxygenated hemoglobin (deoxy-Hb) concentrations during 11 kinds of facial, head, and extremity movements. The probes were set in the bilateral frontal regions. The deoxy-Hb concentration increased in 85% of the measurements. Experiment 2: Using a pillow on the backrest of the chair, we established the video-NIRS system with data acquisition and video capture software. One hundred seventy-six elderly people performed the WF task. The deoxy-Hb concentration decreased in 167 subjects (95%). Experiment 3: Using the video-NIRS system, we measured the Δoxy-Hb and compared it with the results of the WF task. Δoxy-Hb was significantly correlated with the number of words. Conclusion: Like the blood oxygen level-dependent imaging effect in functional MRI, the deoxy-Hb concentration will decrease if the data correctly reflect the change in neural activity. The video-NIRS system might be useful for collecting NIRS data by recording the waveforms and the subject's appearance simultaneously.
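The reported association between Δoxy-Hb and word count is a correlation; a self-contained Pearson-correlation sketch with fabricated example numbers (not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical data: mean delta-oxy-Hb vs. number of words produced
oxy_hb = [0.02, 0.05, 0.04, 0.08, 0.10, 0.07]
words = [8, 12, 11, 16, 19, 14]
print(round(pearson_r(oxy_hb, words), 3))  # strongly positive for these values
```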