WorldWideScience

Sample records for monocular head-mounted display

  1. Visibility of monocular symbology in transparent head-mounted display applications

    Science.gov (United States)

    Winterbottom, M.; Patterson, R.; Pierce, B.; Gaska, J.; Hadley, S.

    2015-05-01

With increased reliance on head-mounted displays (HMDs), such as the Joint Helmet Mounted Cueing System and the F-35 Helmet Mounted Display System, research concerning visual performance has also increased in importance. Although monocular HMDs have been used successfully for many years, a number of authors have reported significant problems with their use. Certain problems have been attributed to binocular rivalry, which arises when differing imagery is presented to the two eyes. During rivalry, the visibility of the two eyes' images fluctuates: one eye's view becomes dominant, and thus visible, while the other eye's view is suppressed, and the two alternate over time. Rivalry is almost certainly created when viewing an occluding monocular HMD. With a semi-transparent monocular HMD, however, much of the scene is binocularly fused, with additional imagery superimposed in one eye; binocular fusion is thought to prevent rivalry. The present study was designed to investigate differences in visibility between monocularly and binocularly presented symbology at varying levels of contrast, while viewing simulated flight over terrain at various speeds. Visibility was estimated by measuring the presentation time required to identify a test probe (a tumbling E) embedded within other static symbology. Results indicated large individual differences, but performance decreased with decreasing test-probe contrast under monocular viewing relative to binocular viewing conditions. Rivalry suppression may therefore reduce the visibility of semi-transparent monocular HMD imagery. However, factors such as contrast sensitivity and masking, and conditions such as monofixation, will be important to examine in future research concerning the visibility of HMD imagery.

  2. Visual task performance using a monocular see-through head-mounted display (HMD) while walking.

    Science.gov (United States)

    Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka

    2013-12-01

    A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. The influence of depth of focus on visibility of monocular head-mounted display symbology in simulation and training applications

    Science.gov (United States)

    Winterbottom, Marc D.; Patterson, Robert; Pierce, Byron J.; Covas, Christine; Winner, Jennifer

    2005-05-01

The Joint Helmet Mounted Cueing System (JHMCS) is being considered for integration into the F-15, F-16, and F-18 aircraft. If this integration occurs, similar monocular head-mounted displays (HMDs) will need to be integrated with existing out-the-window simulator systems for training purposes. One such system is the Mobile Modular Display for Advanced Research and Training (M2DART), which is constructed with flat-panel rear-projection screens around a nominal eye-point. Because the panels are flat, the distance from the eye-point to the display screen varies depending upon the location on the screen to which the observer is directing fixation. Variation in focal distance may create visibility problems for either the HMD symbology or the out-the-window imagery presented on the simulator rear-projection display screen, because observers may not be able to focus both sets of images simultaneously. The extent to which blurring occurs will depend upon the difference between the focal planes of the simulator display and the HMD, as well as the depth of focus of the observer. In our psychophysical study, we investigated whether significant blurring occurs as a result of such differences in focal distance and established an optimal focal distance for an HMD that would minimize blurring over a range of focal distances representative of the M2DART. Our data suggest that blurring of symbology due to differing focal planes is not a significant issue within the range of distances tested, and that the optimal focal distance for an HMD is the optical midpoint between the near and far rear-projection screen distances.

  4. Three-dimensional holographic display using active shutter for head mounted display application

    Science.gov (United States)

    Kim, Hyun-Eui; Kim, Nam; Song, Hoon; Lee, Hong-Seok; Park, Jae-Hyeung

    2011-03-01

A three-dimensional holographic system using active shutters for head-mounted display application is proposed. Conventional three-dimensional head-mounted displays suffer from eye fatigue because they provide only binocular disparity, not monocular depth cues such as accommodation. The proposed method presents two holograms of a 3D scene to the corresponding eyes using active shutters. Since the hologram delivered to each eye carries full three-dimensional information, not only the binocular depth cues but also the monocular depth cues are presented, eliminating eye fatigue. The application to the head-mounted display also greatly relaxes the viewing-angle requirement, which is one of the main issues of conventional holographic displays. The proposed optical system is explained in detail along with experimental results.

  5. Creating Gaze Annotations in Head Mounted Displays

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Qvarfordt, Pernilla

    2015-01-01

To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can...

  6. Maintenance Procedure Display: Head Mounted Display (HMD) Evaluations

    Science.gov (United States)

    Whitmore, Milrian; Litaker, Harry L., Jr.; Solem, Jody A.; Holden, Kritina L.; Hoffman, Ronald R.

    2007-01-01

A viewgraph presentation describing maintenance procedures for head-mounted displays is shown. The topics include: 1) Study Goals; 2) Near Eye Displays (HMDs); 3) Design; 4) Phase 1 Evaluation Methods; 5) Phase 1 Results; 6) Improved HMD Mounting; 7) Phase 2 Evaluation Methods; 8) Phase 2 Preliminary Results; and 9) Next Steps.

  7. Designing a Vibrotactile Head-mounted Display.

    Science.gov (United States)

    de Jesus Oliveira, Victor; Brayda, Luca; Nedel, Luciana; Maciel, Anderson

    2017-01-23

Due to the perceptual characteristics of the head, vibrotactile head-mounted displays are built with low actuator density. Therefore, vibrotactile guidance is mostly assessed by pointing towards objects in the azimuthal plane. When it comes to multisensory interaction in 3D environments, it is also important to convey information about objects in the elevation plane. In this paper, we design and assess a haptic guidance technique for 3D environments. First, we explore the modulation of vibration frequency to indicate the position of objects in the elevation plane. Then, we assess a vibrotactile HMD made to render the position of objects in a 3D space around the subject by varying both stimulus loci and vibration frequency. Results show that frequencies modulated with a quadratic growth function allowed more accurate, precise, and faster target localization in an active head-pointing task. The technique presented high usability and a strong learning effect for haptic search across different scenarios in an immersive VR setup.

  8. Differential effects of head-mounted displays on visual performance.

    Science.gov (United States)

    Schega, Lutz; Hamacher, Daniel; Erfuth, Sandra; Behrens-Baumann, Wolfgang; Reupsch, Juliane; Hoffmann, Michael B

    2014-01-01

Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look-around (LA); optical see-through with organic light-emitting diodes; and virtual retinal display] to determine whether LA, which leaves the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of HMDs on the visual search and Go-NoGo task were small, but display-switching costs for the Go-NoGo task were pronounced for LA. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements, and absent stereo vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect.

  9. A Novel Approach to Surgical Instructions for Scrub Nurses by Using See-Through-Type Head-Mounted Display.

    Science.gov (United States)

    Yoshida, Soichiro; Sasaki, Asami; Sato, Chikage; Yamazaki, Mutsuko; Takayasu, Junya; Tanaka, Naofumi; Okabayashi, Norie; Hirano, Hiromi; Saito, Kazutaka; Fujii, Yasuhisa; Kihara, Kazunori

    2015-08-01

In order to facilitate assisting in surgical procedures, it is important for scrub nurses to understand the operation procedure and to share the operation status with attending surgeons. The potential utility of the head-mounted display as a new imaging monitor has been proposed in the medical field. This study prospectively evaluated the usefulness of a see-through-type head-mounted display as a novel intraoperative instructional tool for scrub nurses. From January to March 2014, scrub nurses who attended gasless laparoendoscopic single-port radical nephrectomy and radical prostatectomy wore a monocular see-through-type head-mounted display (AiRScouter; Brother Industries Ltd, Nagoya, Japan) displaying instructions for the operation procedure through a crystal panel in front of the eye. Following the operation, the participants completed an anonymous questionnaire, which evaluated the image quality of the head-mounted display, the helpfulness of the head-mounted display for understanding the operation procedure, and adverse effects caused by the head-mounted display. Fifteen nurses were eligible for the analysis. The intraoperative use of the head-mounted display could help scrub nurses understand the surgical procedure and hand out the instruments for the operation, with no major head-mounted-display wear-related adverse events. This novel approach to supporting scrub nurses will help facilitate technical and nontechnical skills during surgery.

  10. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    This paper investigates the parallax error, which is a common problem of many video-based monocular mobile gaze trackers. The parallax error is defined and described using the epipolar geometry in a stereo camera setup. The main parameters that change the error are introduced and it is shown how...

  11. Psychometric Assessment of Stereoscopic Head-Mounted Displays

    Science.gov (United States)

    2016-06-29

Journal article (dates covered: Jan 2015 - Dec 2015). ...disparity. This paper details the psychometric validation of the stereoscopic rendering of a virtual environment using game-based simulation software... Keywords: head-mounted display, near-eye display, stereo display, stereo HMD, psychometric assessment, stereoscopic performance, eye-limited stereo vision.

  12. Head Mounted Displays for Virtual Reality

    Science.gov (United States)

    1993-02-01

Fragmentary text: "...image generating capabilities. 2.2.4 Color: Since people are used to viewing the world in color, a full-color display would add to the realism of the..." The remainder of the record consists of reference-list entries, e.g., Lipton, L., Foundations of the Stereoscopic Cinema, New York: Van Nostrand Reinhold Company, 1982.

  13. Interactive stereoscopy optimization for head-mounted displays

    NARCIS (Netherlands)

    Min, P.; Jense, G.J.

    1994-01-01

In current virtual environment systems, the stereoscopic images presented in a head-mounted display are far from optimal. The aim is to achieve orthostereoscopy, which roughly means images should 'behave as in real life.' A theoretical model of stereoscopic optics was used to implement a test and op...

  14. Head-mounted projective displays for creating distributed collaborative environments

    Science.gov (United States)

    Rolland, Jannick P.; Davis, Larry; Ha, Yonggang; Hamza-Lup, Felix G.; Del Vento, Benjamin; Gao, Chunyu; Hua, Hong; Biocca, Frank

    2002-08-01

In this paper, we present an overview of research in augmented reality technology and applications conducted in collaboration with the 3DVIS Lab and the MIND Lab. We present research in the technology of head-mounted projective displays and tracking probes. We then review mathematical methods developed for augmented reality. Finally, we discuss applications in medical augmented reality and point to current developments in distributed 3D collaborative environments.

  15. Head-mounted display system for surgical visualization

    Science.gov (United States)

    Schmidt, Greg W.; Osborn, Dale B.

    1995-05-01

Recent advances in high-resolution color spatial light modulators and lightweight optics, together with application-specific integrated circuits, enable true stereoscopic visualization on a head-mounted display (HMD). The development of precision stereo displays with comfortable long-duration wear characteristics is critically dependent on incorporating key human factors into the HMD design. In this paper we discuss the development of a VGA-format (640 × 480 pixels) full-color video and graphics stereo display. Its primary application is as an element of an integrated, interactive visualization system for surgeons performing minimally invasive surgery and endoscopic laser surgery. Additional uses include endoscopic and open surgical training and rehearsal. The high-resolution, wide field-of-view displays combined with true stereoscopic imagery enhance the surgeon's visualization and ability to navigate and manipulate in the surgical field.

  16. Immersive BCI with SSVEP in VR head-mounted display.

    Science.gov (United States)

    Bonkon Koo; Hwan-Gon Lee; Yunjun Nam; Seungjin Choi

    2015-08-01

In this paper we present an immersive brain-computer interface (BCI) in which we use a virtual reality head-mounted display (VRHMD) to invoke SSVEP responses. Compared to visual stimuli on a monitor display, we demonstrate that visual stimuli in a VRHMD indeed improve user engagement for BCI. To this end, we validate our method with experiments on a VR maze game, the goal of which is to guide a ball to a destination in a 2D grid map in a 3D space, successively choosing one of four neighboring cells using SSVEP evoked by visual stimuli on those cells. Experiments indicate that the average information transfer rate is improved by 10% with the VRHMD compared to the monitor display, and that users found the game easier to play with the proposed system.
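The record above describes selecting one of four targets via SSVEP. The abstract does not specify its classifier; a common baseline, shown purely as an illustrative sketch on synthetic data (function name, sampling rate, and flicker frequencies are assumptions, not from the paper), is to compare spectral power at each candidate flicker frequency:

```python
import numpy as np

def classify_ssvep(eeg, fs, stim_freqs, harmonics=2):
    """Pick the attended stimulus by comparing FFT power at each
    candidate flicker frequency (plus harmonics) in an EEG epoch."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f0 in stim_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            # Accumulate power at the bin nearest each harmonic.
            idx = np.argmin(np.abs(freqs - h * f0))
            score += spectrum[idx]
        scores.append(score)
    return stim_freqs[int(np.argmax(scores))]

# Synthetic check: a 10 Hz oscillation buried in noise should be
# classified as the 10 Hz target among four candidates.
fs = 250
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(classify_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # → 10.0
```

Real SSVEP systems typically use more robust detectors (e.g., canonical correlation analysis over multiple channels); this single-channel version only conveys the frequency-tagging idea.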

  17. Gaze contingent hologram synthesis for holographic head-mounted display

    Science.gov (United States)

    Hong, Jisoo; Kim, Youngmin; Hong, Sunghee; Shin, Choonsung; Kang, Hoonjong

    2016-03-01

Development of displays and related technologies provides an immersive visual experience with a head-mounted display (HMD). However, most available HMDs provide 3D perception only through stereopsis and lack accommodation depth cues. Recently, the holographic HMD (HHMD) has arisen as one viable option to resolve this problem, because a hologram is known to provide the full set of depth cues, including accommodation. Moreover, by virtue of increasing computational power, hologram synthesis from a 3D object represented by a point cloud can be calculated in real time, even with the rigorous Rayleigh-Sommerfeld diffraction formula. However, in an HMD, rapid gaze changes of the user require a much faster refresh rate, which means much faster hologram synthesis is indispensable for an HHMD. Because visual acuity falls off in the visual periphery, we propose to accelerate hologram synthesis by varying the density of the point cloud projected on the screen. We classify the screen into multiple layers, which are concentric circles with different radii, whose center is aligned with the user's gaze. A layer with a smaller radius is closer to the region of interest and is therefore assigned a higher point-cloud density. Because computation time is directly related to the number of points in the point cloud, we can accelerate hologram synthesis by lowering the point-cloud density in the visual periphery. A cognitive study reveals that users cannot discriminate this degradation in the visual periphery if the parameters are properly designed. A prototype HHMD system is provided to verify the feasibility of our method, and the detailed design scheme is discussed.
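The layered decimation scheme described above can be sketched in a few lines. The radii and retention fractions below are illustrative assumptions, not values from the paper; the point is only that retained point density drops with eccentricity from the gaze position:

```python
import numpy as np

def foveated_subsample(points_xy, gaze_xy, radii=(50.0, 150.0),
                       keep=(1.0, 0.5, 0.1), seed=0):
    """Keep all projected points near the gaze, progressively fewer in
    outer concentric layers. 'radii' are the layer boundaries (pixels);
    'keep' gives the retained fraction for each successive layer."""
    rng = np.random.default_rng(seed)
    r = np.linalg.norm(points_xy - gaze_xy, axis=1)
    layer = np.searchsorted(radii, r)          # 0 = fovea, last = periphery
    mask = rng.random(len(points_xy)) < np.asarray(keep)[layer]
    return points_xy[mask]

# 10,000 points spread over a 400x400 screen, gaze at the centre.
pts = np.random.default_rng(1).uniform(0, 400, size=(10000, 2))
gaze = np.array([200.0, 200.0])
dense = foveated_subsample(pts, gaze)
print(len(pts), "->", len(dense))  # far fewer points survive in the periphery
```

Since hologram synthesis cost scales with point count, the reduced cloud would then be fed to the (unchanged) diffraction computation.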

  18. The use of head-mounted display eyeglasses for teaching surgical skills: A prospective randomised study.

    Science.gov (United States)

    Peden, Robert G; Mercer, Rachel; Tatham, Andrew J

    2016-10-01

To investigate whether 'surgeon's eye view' videos provided via head-mounted displays can improve skill acquisition and satisfaction in basic surgical training compared with conventional wet-lab teaching. A prospective randomised study of 14 medical students with no prior suturing experience, randomised to 3 groups: 1) conventional teaching; 2) head-mounted display-assisted teaching; and 3) head-mounted display self-learning. All were instructed in interrupted suturing, followed by 15 minutes' practice. Head-mounted displays provided a 'surgeon's eye view' video demonstrating the technique, available during practice. Subsequently, students undertook a practical assessment in which suturing was videoed and graded by masked assessors using a 10-point surgical skill score (1 = very poor technique, 10 = very good technique). Students completed a questionnaire assessing confidence and satisfaction. Suturing ability after teaching was similar between groups (P = 0.229, Kruskal-Wallis test). Median surgical skill scores were 7.5 (range 6-10), 6 (range 3-8) and 7 (range 1-7) following head-mounted display-assisted teaching, conventional teaching, and head-mounted display self-learning, respectively. There was good agreement between graders regarding surgical skill scores (rho.c = 0.599, r = 0.603), and no difference in the number of sutures placed between groups (P = 0.120). The head-mounted display-assisted teaching group reported greater enjoyment than those attending conventional teaching (P = 0.033). Head-mounted display self-learning was regarded as least useful (7.4 vs 9.0 for conventional teaching, P = 0.021), but more enjoyable than conventional teaching (9.6 vs 8.0, P = 0.050). Teaching augmented with head-mounted displays was significantly more enjoyable than conventional teaching. Students undertaking self-directed learning using head-mounted displays with pre-recorded videos had comparable skill acquisition to those attending traditional wet...

  19. Design for an Improved Head-Mounted Display System

    Science.gov (United States)

    2007-11-02

...weight ratio and ease of manufacturing. For most structural parts, Polyetherimide (PEI, ULTEM®) is chosen; ULTEM® keeps its hardness and... [Figure: exploded view of the monocular opto-mechanical module. Parts include a frame, front housing, prism housing, lens barrel, capture/mount for the opto-mechanical assembly, and IPD plate (all Ultem); a heatsink and side knobs (Al); and a crossed roller slide (steel).]

  20. Sensor and Display Human Factors Based Design Constraints for Head Mounted and Tele-Operation Systems

    Directory of Open Access Journals (Sweden)

    Ralph Etienne-Cummings

    2011-01-01

For mobile imaging systems in head mounted displays and tele-operation systems it is important to maximize the amount of visual information transmitted to the human visual system without exceeding its input capacity. This paper aims to describe the design constraints on the imager and display systems of head mounted devices and tele-operated systems based upon the capabilities of the human visual system. We also present the experimental results of methods to improve the amount of visual information conveyed to a user when trying to display a high dynamic range image on a low dynamic range display.
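The last sentence concerns showing a high-dynamic-range image on a low-dynamic-range display. The record does not specify the authors' method; as a generic illustration only, the sketch below applies the standard global Reinhard tone-mapping operator, a common baseline for this problem (the `key` value and the sample data are assumptions):

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Global Reinhard operator: compress scene luminance into [0, 1)
    for a low-dynamic-range display. Not the paper's method -- a
    standard baseline for HDR-to-LDR mapping."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # geometric mean
    scaled = key * luminance / log_avg                  # anchor mid-grey at 'key'
    return scaled / (1.0 + scaled)                      # compress to [0, 1)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # 5 orders of magnitude
ldr = reinhard_tonemap(hdr)
print(ldr.round(3))  # monotone increasing, all values within [0, 1)
```

The operator preserves ordering while compressing the range, which is the essential property for conveying more of the scene's information on a limited display.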

  1. Compact three-dimensional head-mounted display system with Savart plate.

    Science.gov (United States)

    Lee, Chang-Kun; Moon, Seokil; Lee, Seungjae; Yoo, Dongheon; Hong, Jong-Young; Lee, Byoungho

    2016-08-22

We propose a three-dimensional (3D) head-mounted display (HMD) that provides a multi-focal capability in a wearable form factor by using polarization-dependent optical path switching in a Savart plate. The multi-focal function is implemented as a microdisplay with a high pixel density of 1666 pixels per inch that is optically duplicated in the longitudinal direction according to the polarization state. The combination of microdisplay, fast-switching polarization rotator, and Savart plate retains a small form factor suitable for a wearable device. The optical aberrations of the duplicated panels are investigated by ray tracing according to both wavelength and polarization state. Astigmatism and the lateral chromatic aberration of the extraordinary wave are compensated by modification of the Savart plate and by a sub-pixel shifting method, respectively. To verify the feasibility of the proposed system, a prototype of the HMD module for one eye was implemented. The module has a compact size of 40 mm × 90 mm × 40 mm and a weight of 131 g. The microdisplay and polarization rotator are synchronized in real time at 30 Hz, and two focal planes are formed 640 and 900 mm away from the eye box. In experiments, the prototype also provides an augmented reality function by combining the optically duplicated panels with a beam splitter. The multi-focal function of the optically duplicated panels, without astigmatism and color-dispersion compensation, is verified. When light field optimization for two additive layers is performed, perspective images are observed, and the integration of the real-world scene with high-quality 3D images is confirmed.

  2. Recent advances in head-mounted light field displays for virtual and augmented reality (Conference Presentation)

    Science.gov (United States)

    Hua, Hong

    2017-02-01

    Head-mounted light field displays render a true 3D scene by sampling either the projections of the 3D scene at different depths or the directions of the light rays apparently emitted by the 3D scene and viewed from different eye positions. They are capable of rendering correct or nearly correct focus cues and addressing the very well-known vergence-accommodation mismatch problem in conventional virtual and augmented reality displays. In this talk, I will focus on reviewing recent advancements of head-mounted light field displays for VR and AR applications. I will demonstrate examples of HMD systems developed in my group.

  3. Use cases and usability challenges for head-mounted displays in healthcare

    Directory of Open Access Journals (Sweden)

    Mentler T.

    2015-09-01

In the healthcare domain, head-mounted displays (HMDs) with augmented reality (AR) modalities have been reconsidered for application as a result of commercially available products and the need to use computers in mobile contexts. Within a user-centered design approach, interviews were conducted with physicians, nursing staff, and members of emergency medical services. Additionally, practitioners were involved in evaluating two different head-mounted displays. Based on these measures, use cases and usability considerations concerning interaction design and information visualization were derived and are described in this contribution.

  4. Conformal Light Augmented Single Substrate Head-Mounted Display Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To address the NASA Exploration Systems Mission Directorate (ESMD) need for space suit displays and processing cores, Physical Optics Corporation (POC) proposes to...

  5. Effect of Oculus Rift head mounted display on postural stability

    DEFF Research Database (Denmark)

    Epure, Paula; Gheorghe, Cristina; Nissen, Thomas;

    2016-01-01

This study explored how a virtual environment experienced through an HMD influences the physical balance of six balance-impaired adults 59-69 years of age, compared to a control group of eight non-balance-impaired adults 18-28 years of age. The setup included a Microsoft Kinect and a bespoke balance board... controlling a virtual reality skiing game. Two tests were conducted: full vision versus blindfolded, and HMD versus monitor display. Results indicate that five of the six balance-impaired adults and six of the eight non-balance-impaired adults showed a higher degree of postural stability while using a monitor display...

  6. Wearable Laser Pointer Versus Head-mounted Display for Tele-guidance Applications?

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Pederson, Thomas; Houben, Steven

    2014-01-01

...alternatives to Head-Mounted Displays for indicating where in the physical environment the local agent should direct her/his attention. The potential benefit of the laser pointer would be reduced eye fatigue, due to the fact that the documented refocusing challenges associated with HMD use would be completely...

  7. Immersive Eating: Evaluating the Use of Head-Mounted Displays for Mixed Reality Meal sessions

    DEFF Research Database (Denmark)

    Korsgaard, Dannie Michael; Nilsson, Niels Chr.; Bjørner, Thomas

    2017-01-01

    This paper documents a pilot study evaluating a simple approach allowing users to eat real food while exploring a virtual environment (VE) through a head-mounted display (HMD). Two cameras mounted on the HMD allowed for video-based stereoscopic see-through when the user’s head orientation pointed...

  9. "Head up and eyes out" advances in head mounted displays capabilities

    Science.gov (United States)

    Cameron, Alex

    2013-06-01

There are a host of helmet- and head-mounted displays flooding the marketplace, providing what is essentially a mobile computer display. What sets aviators' HMDs apart is that they provide the user with accurate conformal information embedded in the pilot's real-world view (a see-through display), where the information presented is intuitive and easy to use because it overlays the real world (a mix of sensor imagery, symbolic information, and synthetic imagery) and enables pilots to stay head up and eyes out, improving their effectiveness, reducing workload, and improving safety. Such systems are an enabling technology in the provision of enhanced Situation Awareness (SA) and in reducing user workload in high-intensity situations. Safety is key, so the addition of these HMD functions cannot detract from the aircrew-protection functions of conventional aircrew helmets, which also include life support and audio communications. These capabilities are finding much wider application in new types of compact man-mounted audio/visual products enabled by the emergence of new families of microdisplays, novel optical concepts, and ultra-compact low-power processing solutions. This paper attempts to capture the key drivers and needs for future head-mounted systems for aviation applications.

  10. In the blink of an eye: head mounted displays development within BAE Systems

    Science.gov (United States)

    Cameron, Alex

    2015-05-01

There has been an explosion of interest in head-worn displays in recent years, particularly for consumer applications, with an attendant ramping up of investment in key enabling technologies to provide what is in essence a mobile computer display. However, head-mounted systems have been around for over 40 years, and today's consumer products build on a legacy of knowledge and technology created by companies such as BAE Systems, which has been designing and fielding helmet-mounted displays (HMDs) for a wide range of specialist applications. Although the dominant application area has been military aviation, solutions have been fielded for soldier, ground vehicle, simulation, medical, racing car, and even subsea navigation applications. What sets these HMDs apart is that they provide the user with accurate conformal information embedded in the user's real-world view, where the information presented is intuitive and easy to use because it overlays the real world and enables users to stay head up and eyes out, improving their effectiveness, reducing workload, and improving safety. Such systems are an enabling technology in the provision of enhanced Situation Awareness (SA) and in reducing user workload in high-intensity situations. These capabilities are finding much wider application in new types of compact man-mounted audio/visual products enabled by the emergence of new families of microdisplays, novel optical concepts, and ultra-compact low-power processing solutions. This paper therefore provides a personal summary of BAE Systems' 40-year journey in developing and fielding head-mounted systems and their applications.

  11. A depth-based head-mounted visual display to aid navigation in partially sighted individuals.

    Directory of Open Access Journals (Sweden)

    Stephen L Hicks

    Independent navigation can be extremely difficult for blind individuals due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision the individual is likely to retain. Residual vision can be restricted to awareness of the orientation of a light source, so any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes, without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice, with no increase in the number of collisions. In a second experiment, a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired, and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals.

  12. Optical methods for enabling focus cues in head-mounted displays for virtual and augmented reality

    Science.gov (United States)

    Hua, Hong

    2017-05-01

    Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both the digital and physical worlds, without encumbrance and discomfort, confronts many grand challenges, from both technological and human-factors perspectives. Among these, minimizing visual discomfort is one of the key obstacles. A key contributing factor to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
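The accommodation-convergence conflict this record describes is commonly quantified as the dioptric difference between the HMD's fixed focal plane (where the eyes must accommodate) and the rendered object's depth (where they converge). The sketch below is a generic illustration only; the distances are hypothetical and not drawn from the paper:

```python
def diopters(distance_m: float) -> float:
    """Optical power required to focus at a given distance, in diopters (1/m)."""
    return 1.0 / distance_m

def accommodation_vergence_conflict(focal_plane_m: float, object_m: float) -> float:
    """Mismatch between the fixed HMD focal plane and the rendered
    object depth, expressed in diopters."""
    return abs(diopters(focal_plane_m) - diopters(object_m))

# HMD with its virtual screen fixed at 2 m, object rendered at 0.5 m:
conflict = accommodation_vergence_conflict(2.0, 0.5)  # 1.5 D of conflict
```

Conflicts beyond roughly a fraction of a diopter are the regime in which the discomfort discussed above is typically reported, which is why varifocal and light-field approaches try to drive this difference toward zero.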

  13. Continued Testing of Head-Mounted Displays for Deaf Education in a Planetarium

    Science.gov (United States)

    Hintz, Eric G.; Jones, M.; Lawler, J.; Bench, N.; Mangrubang, F. R.

    2013-06-01

    For more than a year we have been developing techniques for using Head-Mounted Displays (HMDs) to accommodate a deaf audience in a planetarium environment. Our target audience is primarily children from 8 to 13 years of age, but the methodologies can be used for a wide variety of audiences, and applications extend beyond the planetarium environment. Three tests have been done to determine whether American Sign Language (ASL) can be delivered to the HMD such that the student can view both the planetarium show and the ASL ‘sound track’. Building on those early results, we are now testing for comprehension improvement on a number of astronomical subjects, and we will present several of these early results.

  14. Optical gesture sensing and depth mapping technologies for head-mounted displays: an overview

    Science.gov (United States)

    Kress, Bernard; Lee, Johnny

    2013-05-01

    Head-Mounted Displays (HMDs), and especially see-through HMDs, have gained renewed interest in recent times, and for the first time outside the traditional military and defense realm, as several high-profile consumer electronics companies prepare products to hit the market. Consumer electronics HMDs have quite different requirements and constraints than their military counterparts. Voice commands are the de facto interface for such devices, but when voice recognition does not work (no connection to the cloud, for example), trackpad and gesture-sensing technologies have to be used to communicate information to the device. In this paper we review the various technologies developed to date for integrating optical gesture sensing in a small footprint, as well as the related 3D depth-mapping sensors.

  15. Medical Screen Operations: How Head-Mounted Displays Transform Action and Perception in Surgical Practice

    Directory of Open Access Journals (Sweden)

    Moritz Queisner

    2016-09-01

    Based on case studies in minimally invasive surgery, this paper investigates how head-mounted displays (HMDs) transform action and perception in the operating theatre. In particular, it discusses the methods and addresses the obstacles linked to the attempt to eliminate the divide between vision and visualization by augmenting the surgeon's field of view with images. First, it analyzes how HMDs change the way images are integrated into the surgical workflow by looking at the modalities of image production, transmission, and reception in HMDs. Second, through an analysis of screen architecture and design, it examines how HMDs affect the locations and situations in which images are used. And third, it discusses the consequences of HMD-based practice as applied to action, perception, and decision-making, with attention to how HMDs challenge the existing techniques and routines of surgical practice and, therefore, necessitate a new type of image- and application-based expertise.

  16. A 3D integral imaging optical see-through head-mounted display.

    Science.gov (United States)

    Hua, Hong; Javidi, Bahram

    2014-06-02

    An optical see-through head-mounted display (OST-HMD), which enables optical superposition of digital information onto the direct view of the physical world and maintains see-through vision of the real world, is a vital component in an augmented reality (AR) system. A key limitation of state-of-the-art OST-HMD technology is the well-known accommodation-convergence mismatch problem, caused by the fact that the image source in most existing AR displays is a 2D flat surface located at a fixed distance from the eye. In this paper, we present an innovative approach to OST-HMD design that combines recent advances in freeform optical technology with the microscopic integral imaging (micro-InI) method. A micro-InI unit creates a 3D image source for the HMD viewing optics, instead of a typical 2D display surface, by reconstructing a miniature 3D scene from a large number of perspective images of the scene. By taking advantage of emerging freeform optical technology, our approach yields a compact, lightweight, goggle-style AR display that is potentially less vulnerable to the accommodation-convergence discrepancy problem and to visual fatigue. A proof-of-concept prototype system is demonstrated, which offers a goggle-like compact form factor, a non-obstructive see-through field of view, and a true 3D virtual display.

  17. Immersive Collaborative Analysis of Network Connectivity: CAVE-style or Head-Mounted Display?

    Science.gov (United States)

    Cordeil, Maxime; Dwyer, Tim; Klein, Karsten; Laha, Bireswar; Marriott, Kim; Thomas, Bruce H

    2017-01-01

    High-quality immersive display technologies are becoming mainstream with the release of head-mounted displays (HMDs) such as the Oculus Rift. These devices potentially represent an affordable alternative to the more traditional, centralised CAVE-style immersive environments. One driver for the development of CAVE-style immersive environments has been collaborative sense-making. Despite this, there has been little research on the effectiveness of collaborative visualisation in CAVE-style facilities, especially with respect to abstract data visualisation tasks. Indeed, very few studies have focused on the use of these displays to explore and analyse abstract data such as networks, and there have been no formal user studies investigating collaborative visualisation of abstract data in immersive environments. In this paper we present the results of the first such study. It explores the relative merits of HMD and CAVE-style immersive environments for collaborative analysis of network connectivity, a common and important task involving abstract data. We find significant differences between the two conditions in task completion time and in the physical movements of participants within the space: participants using the HMD were faster, while the CAVE2 condition introduced an asymmetry in movement between collaborators. Otherwise, the affordances for collaborative data analysis offered by the low-cost HMD condition were not found to differ from the CAVE2 in accuracy or communication. These results are notable, given that the latest HMDs will soon be accessible (in terms of cost and, potentially, ubiquity) to a massive audience.

  18. The Effect of Head Mounted Display Weight and Locomotion Method on the Perceived Naturalness of Virtual Walking Speeds

    DEFF Research Database (Denmark)

    Nilsson, Niels Chr.; Serafin, Stefania; Nordahl, Rolf

    This poster details a study investigating the effect of Head Mounted Display (HMD) weight and locomotion method (Walking-In-Place and treadmill walking) on the perceived naturalness of virtual walking speeds. The results revealed significant main effects of movement type, but no significant effects...

  20. Virtual reality exposure treatment of agoraphobia: a comparison of computer automatic virtual environment and head-mounted display

    NARCIS (Netherlands)

    Meyerbröker, K.; Morina, N.; Kerkhof, G.; Emmelkamp, P.M.G.; Wiederhold, B.K.; Bouchard, S.; Riva, G.

    2011-01-01

    In this study the effects of virtual reality exposure therapy (VRET) were investigated in patients with panic disorder and agoraphobia. The level of presence in VRET was compared between using either a head-mounted display (HMD) or a computer automatic virtual environment (CAVE). Results indicate

  1. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    Science.gov (United States)

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMDs) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors that are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that the optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such element introduces different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the accuracy of interaction-free calibration.

  2. New head-mounted display system applied to endoscopic management of upper urinary tract carcinomas

    Directory of Open Access Journals (Sweden)

    Junichiro Ishioka

    2014-12-01

    Purpose: We tested a new head-mounted display (HMD) system for surgery on the upper urinary tract. Surgical technique: Four women and one man with abnormal findings in the renal pelvis on computed tomography and magnetic resonance imaging underwent surgery using this new system. A high-definition HMD (Sony, Tokyo, Japan) is connected to a flexible ureteroscope (Olympus, Tokyo, Japan), and the images from the ureteroscope are delivered simultaneously to the various participants wearing HMDs. Furthermore, information beyond that available through the endoscope - such as the narrow-band image, the fluoroscope, input from a video camera mounted on the lead surgeon's HMD, and the vital monitors - can be viewed on each HMD. Results: Median operative duration and anesthesia time were 53 and 111 minutes, respectively. The ureteroscopic procedures were successfully performed in all cases, with no notable negative outcomes or incidents (Clavien-Dindo grade ≥1). Conclusion: The HMD system offers simultaneous, high-quality magnified imagery in front of the eyes, regardless of head position, to those participating in the endoscopic procedures. This affordable display system also provides various forms of information related to examinations and operations while allowing direct vision and navigated vision.

  3. Markerless Augmented Reality via Stereo Video See-Through Head-Mounted Display Device

    Directory of Open Access Journals (Sweden)

    Chung-Hung Hsieh

    2015-01-01

    Conventionally, camera localization for augmented reality (AR) relies on detecting a known pattern within the captured images. In this study, a markerless AR scheme has been designed based on a stereo video see-through head-mounted display (HMD) device. The proposed markerless AR scheme can be utilized for medical applications such as training, telementoring, or preoperative explanation. First, a virtual model for AR visualization is aligned to the target in physical space by an improved Iterative Closest Point (ICP) based surface registration algorithm, with the target surface structure reconstructed by a stereo camera pair; then, a markerless AR camera localization method is designed based on the Kanade-Lucas-Tomasi (KLT) feature tracking algorithm and the Random Sample Consensus (RANSAC) correction algorithm. Our AR camera localization method is shown to outperform traditional marker-based and sensor-based AR environments. The demonstration system was evaluated with a plastic dummy head, and the display result is satisfactory for multiple-view observation.
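The RANSAC correction step named in this record follows the generic hypothesize-and-verify pattern: sample a minimal set of correspondences, fit a model, count inliers, and keep the best model. The minimal sketch below applies that pattern to a toy line-fitting problem; the data and the line model are hypothetical stand-ins for the paper's camera-pose formulation:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = m*x + b by RANSAC: repeatedly sample 2 points,
    fit a line, count inliers within tol, and keep the best model."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # vertical pair: skip this hypothesis
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(m * x + b - y) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers

# Synthetic feature matches: 20 points on y = 2x + 1 plus two gross outliers.
pts = [(k / 10, 2 * (k / 10) + 1) for k in range(20)] + [(0.5, 9.0), (1.2, -4.0)]
(m, b), n = ransac_line(pts)  # recovers m ≈ 2, b ≈ 1 and rejects the outliers
```

In the paper's setting, the same loop would hypothesize a camera pose from a minimal subset of KLT feature tracks instead of a 2D line, but the inlier-counting logic is unchanged.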

  4. Photosensor-Based Latency Measurement System for Head-Mounted Displays

    Directory of Open Access Journals (Sweden)

    Min-Woo Seo

    2017-05-01

    In this paper, a photosensor-based latency measurement system for head-mounted displays (HMDs) is proposed. Motion-to-photon latency is the main cause of the motion sickness and dizziness felt by users wearing an HMD system, so a measurement system is needed to accurately measure and analyze this latency in order to reduce these problems. Existing measurement systems do not consider actual physical human movement, and their accuracy is very low; the proposed system accounts for physical head movement and is highly accurate. Specifically, it consists of a head-position-model-based rotary platform, a pixel luminance change detector, and signal analysis and calculation modules. Using these modules, the proposed system can precisely measure the latency, defined as the time difference between the user's physical movement and the luminance change of the output image. In an experiment using a commercial HMD, the latency was measured to be up to 47.05 ms, and it increased up to 381.17 ms as the rendering workload in the HMD was increased.
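The latency definition used in this record (time from physical movement to the corresponding luminance change) reduces to a threshold-crossing computation over two synchronized signal streams. The sketch below illustrates that computation with hypothetical sample data and thresholds, not the paper's actual hardware interface:

```python
def onset_time(samples, times, threshold):
    """Return the time of the first sample at or above the threshold, or None."""
    for t, s in zip(times, samples):
        if s >= threshold:
            return t
    return None

def motion_to_photon_latency(motion, photo, times, motion_thr, photo_thr):
    """Latency = first luminance change minus first physical movement."""
    return onset_time(photo, times, photo_thr) - onset_time(motion, times, motion_thr)

# Hypothetical 1 kHz traces: the rotary platform starts moving at t = 10 ms
# and the photosensor over the display finally brightens at t = 57 ms.
times = list(range(100))                       # timestamps in ms
motion = [0 if t < 10 else 5 for t in times]   # rotary-encoder counts
photo = [0 if t < 57 else 200 for t in times]  # photosensor ADC reading
latency = motion_to_photon_latency(motion, photo, times, 1, 100)  # 47 ms
```

In practice both streams must share one clock and be sampled well above the display's refresh rate, otherwise the threshold-crossing times are quantized by the sampling interval.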

  5. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both have significant drawbacks given the nature of a planetarium show: young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective, while a signer on the floor requires light, which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects; because these are not official signs, the classifiers provided a way to test whether students were picking up the information using the HMD. We will present results demonstrating that the use of HMDs is at least as effective as projecting a signer on the dome, and that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light-splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from National Science Foundation grant IIS-1124548 and the Sorenson Foundation.

  6. Head-Mounted Display Screens: A (De)Construction of Sense-Certainty?

    Directory of Open Access Journals (Sweden)

    Michael Friedman

    2016-09-01

    This essay addresses the philosophical and epistemological aspects of the spatialization of the field of vision as manifested in two of the most cutting-edge innovations in the field of head-mounted displays (HMDs): Birdly and EyeSect. The notion of space in both installations is problematized. Birdly re-enacts the familiar ways of envisaging the Cartesian model of space: a point of origin is determined and, hence, the construction of space itself with its three redefined dimensions is established—the (anchor) point of certainty is given by the apparatus itself. With EyeSect, however, the field of vision disintegrates. This paper asserts that there is no fixed point of origin from which space can be constituted. Using a Deleuzian analysis of “nomad sciences,” this point is exposed—through the unconstrained operation of the cameras in perpetual motion—as an imaginary, unstable point of reference. While Birdly recreates a unified space through an apparatus that affords certainty, EyeSect disintegrates both the body, as the point of origin, and the “natural” perception of space, suggesting that a complete integration of these two notions is illusory at best.

  7. Contribution of TopOwl head mounted display system in degraded visual environments

    Science.gov (United States)

    Lemoine, Olivier; François, Jean-Michel; Point, Pascal

    2013-05-01

    Piloting a rotorcraft in a Degraded Visual Environment (DVE) is a very complex task, and the evolution of rotorcraft missions tends to increase the probability of such degraded flight conditions (more night flights and all-weather flights, with brownout or whiteout phenomena…). When the direct view of the external situation is degraded, the avionic system can be of great help to the crew in recovering the lost visual references. The TopOwl® Head Mounted Sight and Display (HMSD) system is particularly well adapted to such situations, allowing the pilot to remain "eyes-out" while visualizing different information over a large field of view: a conformal enhanced image (EVS) coming from an on-board sensor; various 2D and 3D symbologies (flight, navigation, and mission-specific symbols); a conformal synthetic representation of the terrain (SVS); a night-vision image coming from the integrated image intensifier tubes; or a combination of these data, depending on the external conditions and the phase of flight, according to the pilot's choice.

  8. Multisensory integration with a head-mounted display: role of mental and manual load.

    Science.gov (United States)

    Thompson, Matthew B; Tear, Morgan J; Sanderson, Penelope M

    2010-02-01

    The aim of this study was to replicate the finding that multisensory integration with a head-mounted display (HMD) is particularly difficult when a person is walking and hearing sound from a free-field speaker, and to extend the finding with a response method intended to reduce workload. HMDs can support the information needs of workers whose work requires mobility, but some low-cost solutions for delivering auditory information may be less effective than others. For the study, 24 participants detected whether shapes moving on the HMD screen made a sound appropriate to their forms when they collided with other shapes. Independent variables were self-motion (participants were mobile or seated), sound delivery (free-field speakers or an earpiece), and response method (noting mismatches via a mental count or via a manual clicker). Unexpectedly, overall mismatch task accuracy was worse with the clicker (p = .027) than without. Participants also reported that it was harder to time-share the mismatch task with clicker responses (p = .033). In the clicker condition, self-motion and sound delivery interacted but in the opposite direction to the previous study. The best way of delivering auditory information to mobile workers performing a multisensory integration task with an HMD may depend on whether responding involves mental load or manual load. Broader theories are needed to capture factors influencing performance. Until more powerful theory is developed, designers should perform careful formative and summative tests of whether the activities to be performed by mobile HMD wearers will make some sound delivery solutions less effective than others.

  9. Rapid P300 brain-computer interface communication with a head-mounted display

    Directory of Open Access Journals (Sweden)

    Ivo eKäthner

    2015-06-01

    Visual ERP (P300) based brain-computer interfaces (BCIs) allow for fast and reliable spelling and are intended as a muscle-independent communication channel for people with severe paralysis. However, they require the presentation of visual stimuli in the field of view of the user. A head-mounted display could allow convenient presentation of visual stimuli in situations where mounting a conventional monitor might be difficult or not feasible (e.g. at a patient's bedside). To explore whether similar accuracies can be achieved with a virtual reality (VR) headset compared to a conventional flat-screen monitor, we conducted an experiment with 18 healthy participants. We also evaluated the system with a person in the locked-in state (LIS) to verify that usage of the headset is possible for a severely paralyzed person. Healthy participants performed online spelling with three different display methods. In one condition, a 5x5 letter matrix was presented on a conventional 22-inch TFT monitor. Two configurations of the VR headset were tested: in the first (glasses A), the same 5x5 matrix filled the field of view of the user; in the second (glasses B), single letters of the matrix filled the field of view of the user. The participant in the LIS tested the VR headset on 3 different occasions (glasses A condition only). For healthy participants, average online spelling accuracies were 94% (15.5 bits/min) using three flash sequences for spelling with the monitor and glasses A, and 96% (16.2 bits/min) with glasses B. In one session, the participant in the LIS reached an online spelling accuracy of 100% (10 bits/min) using the glasses A condition. We also demonstrated that spelling with one flash sequence is possible with the VR headset for healthy users (mean: 32.1 bits/min; maximum reached by one user: 71.89 bits/min at 100% accuracy). We conclude that the VR headset allows for rapid P300 BCI communication in healthy users and may be a suitable display option for severely paralyzed users.
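The bits/min figures quoted in this record are information transfer rates. Assuming the standard Wolpaw ITR definition (the record does not state which formula was used), the per-selection rate for a 5x5 speller matrix can be reproduced as follows:

```python
from math import log2

def wolpaw_bits_per_selection(n: int, p: float) -> float:
    """Wolpaw information transfer rate per selection for n symbols
    at accuracy p: B = log2(n) + p*log2(p) + (1-p)*log2((1-p)/(n-1))."""
    if p >= 1.0:
        return log2(n)  # perfect accuracy: full log2(n) bits per selection
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

# 5x5 speller matrix (25 symbols) at the reported 94% accuracy:
bits = wolpaw_bits_per_selection(25, 0.94)  # ~4.04 bits per selection
# 15.5 bits/min at this accuracy would then imply roughly 3.8 selections/min.
```

Multiplying the per-selection rate by selections per minute gives the bits/min values the study reports; shorter flash sequences raise the selection rate at some cost in accuracy, which is the trade-off behind the one-flash-sequence figures.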

  10. Effects of Videogame Distraction using a Virtual Reality Type Head-Mounted Display Helmet on Cold Pressor Pain in Children

    OpenAIRE

    Dahlquist, Lynnda M.; Weiss, Karen E.; Dillinger Clendaniel, Lindsay; Law, Emily F.; Ackerman, Claire Sonntag; McKenna, Kristine D.

    2008-01-01

    Objective: To test whether a head-mounted display helmet enhances the effectiveness of videogame distraction for children experiencing cold pressor pain. Method: Forty-one children, aged 6–14 years, underwent one or two baseline cold pressor trials followed by two distraction trials in which they played the same videogame with and without the helmet, in counterbalanced order. Pain threshold (elapsed time until the child reported pain) and pain tolerance (total time the child kept the hand submerged…

  11. Amblyopia treatment of adults with dichoptic training using the virtual reality oculus rift head mounted display: preliminary results.

    Science.gov (United States)

    Žiak, Peter; Holm, Anders; Halička, Juraj; Mojžiš, Peter; Piñero, David P

    2017-06-28

    The gold-standard treatments in amblyopia are penalization therapies, such as patching or blurring vision with atropine, that are aimed at forcing the use of the amblyopic eye. In recent years, however, new therapies have been developed and validated, such as dichoptic visual training, aimed at stimulating the amblyopic eye and eliminating interocular suppression. Our aim was to evaluate the effect of dichoptic visual training using a virtual reality head-mounted display in a sample of adults with anisometropic amblyopia and to evaluate the potential usefulness of this treatment option. A total of 17 subjects (10 men, 7 women) with a mean age of 31.2 years (range, 17-69 years) and anisometropic amblyopia were enrolled. Best corrected visual acuity (BCVA) and stereoacuity (Stereo Randot graded circle test) changes were evaluated after 8 sessions (40 min per session) of dichoptic training with the computer game Diplopia Game (Vivid Vision) run on the Oculus Rift OC DK2 virtual reality head-mounted display (Oculus VR). Mean BCVA in the amblyopic eye improved significantly, from a logMAR value of 0.58 ± 0.35 before training to a post-training value of 0.43 ± 0.38. Dichoptic training using a virtual reality head-mounted display therefore seems to be an effective treatment option in adults with anisometropic amblyopia; future clinical trials are needed to confirm this preliminary evidence. Trial ID: ISRCTN62086471. Date registered: 13/06/2017. Retrospectively registered.
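The reported BCVA change can be restated in chart lines and decimal acuity using the standard logMAR relationships (decimal acuity = 10^(-logMAR), one chart line = 0.1 logMAR). This back-of-envelope conversion is illustrative and not from the paper itself:

```python
def logmar_to_decimal(logmar: float) -> float:
    """Convert logMAR visual acuity to decimal (Snellen fraction) acuity."""
    return 10.0 ** (-logmar)

def lines_gained(logmar_before: float, logmar_after: float) -> float:
    """One standard chart line = 0.1 logMAR; positive means improvement."""
    return (logmar_before - logmar_after) / 0.1

before, after = 0.58, 0.43                  # mean amblyopic-eye BCVA (record)
gain = lines_gained(before, after)          # 1.5 chart lines gained
decimal_before = logmar_to_decimal(before)  # ~0.26 decimal acuity
decimal_after = logmar_to_decimal(after)    # ~0.37 decimal acuity
```

So the mean improvement of 0.15 logMAR corresponds to about a line and a half on a standard acuity chart.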

  12. Development of a surgical navigation system based on augmented reality using an optical see-through head-mounted display.

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Wang, Yiping; Wang, Huixiang; Wang, Fang; Zeng, Xiangsen; Wang, Qiugen; Egger, Jan

    2015-06-01

    The surgical navigation system has experienced tremendous development over the past decades for minimizing the risks and improving the precision of surgery. Nowadays, Augmented Reality (AR)-based surgical navigation is a promising technology for clinical applications. In the AR system, virtual and actual reality are mixed, offering real-time, high-quality visualization of an extensive variety of information to the users (Moussa et al., 2012) [1]. For example, virtual anatomical structures such as soft tissues, blood vessels and nerves can be integrated with the real-world scenario in real time. In this study, an AR-based surgical navigation system (AR-SNS) is developed using an optical see-through HMD (head-mounted display), aiming at improving the safety and reliability of surgery. With the use of this system, including the calibration of instruments, registration, and the calibration of the HMD, the 3D virtual critical anatomical structures in the head-mounted display are aligned with the actual structures of the patient in the real-world scenario during the intra-operative motion tracking process. The accuracy verification experiment demonstrated that the mean distance and angular errors were respectively 0.809 ± 0.05 mm and 1.038° ± 0.05°, which is sufficient to meet clinical requirements. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Application of virtual reality head mounted display for investigation of movement: a novel effect of orientation of attention

    Science.gov (United States)

    Quinlivan, Brendan; Butler, John S.; Beiser, Ines; Williams, Laura; McGovern, Eavan; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B.

    2016-10-01

    Objective. To date, human kinematics research has relied on video processing, motion capture and magnetic search coil data acquisition techniques. However, the use of head-mounted display virtual reality systems as a novel research tool could facilitate new studies of human movement and movement disorders. These systems have the unique ability to present immersive 3D stimuli while also allowing participants to make ecologically valid movement-based responses. Approach. We employed one such system (Oculus Rift DK2) in this study to present visual stimuli and acquire head-turn data from a cohort of 40 healthy adults. Participants were asked to complete head movements towards eccentrically located visual targets following valid and invalid cues. Such tasks are commonly employed for investigating the effects of orientation of attention and are known as Posner cueing paradigms. Electrooculography was also recorded for a subset of 18 participants. Main results. A delay was observed in the onset of head movement and saccade onset during invalid trials, both at the group and single-participant level. We found that participants initiated head turns 57.4 ms earlier during valid trials. A strong relationship between saccade onset and head movement onset was also observed during valid trials. Significance. This work represents the first time that the Posner cueing effect has been observed in the onset of head movement in humans. The results presented here highlight the role of head-mounted display systems as a novel and practical research tool for investigations of normal and abnormal movement patterns.

  14. LIBS system with compact fiber spectrometer, head mounted spectra display and hand held eye-safe erbium glass laser gun

    Science.gov (United States)

    Myers, Michael J.; Myers, John D.; Sarracino, John T.; Hardy, Christopher R.; Guo, Baoping; Christian, Sean M.; Myers, Jeffrey A.; Roth, Franziska; Myers, Abbey G.

    2010-02-01

    LIBS (Laser Induced Breakdown Spectroscopy) systems are capable of real-time chemical analysis with little or no sample preparation. A Q-switched laser is configured such that laser-induced plasma is produced on the targeted material. Chemical element line spectra are created, collected and analyzed by a fiber spectrometer, and the line-spectra emission data are instantly viewed on a head-mounted display. "Eye-safe" Class I erbium glass lasers allow in-situ LIBS applications without the need for eye-protection goggles, because megawatt-peak-power Q-switched lasers operating in the narrow spectral window between 1.5 µm and 1.6 µm are approximately 8000 times more "eye-safe" than laser devices operating in the UV, visible and near infrared. In this work we construct and demonstrate a LIBS system that includes a hand-held eye-safe laser gun. The laser gun is fitted with a micro-integrating-sphere in-situ target interface and is designed to facilitate chemical analysis in remote locations. The laser power supply, battery pack, computer controller and spectrophotometer components are packaged into a utility belt, and a head-mounted display is employed for "hands-free" viewing of the emitted line spectra. The system demonstrates that instant qualitative and semi-quantitative chemical analyses may be performed in remote locations using lightweight, commercially available system components ergonomically fitted to the operator.

  15. 3D optical see-through head-mounted display based augmented reality system and its application

    Science.gov (United States)

    Zhang, Zhenliang; Weng, Dongdong; Liu, Yue; Xiang, Li

    2015-07-01

    The combination of health and entertainment becomes possible due to the development of wearable augmented reality equipment and corresponding application software. In this paper, we implemented a fast calibration method, extended from SPAAM, for an optical see-through head-mounted display (OSTHMD) built in our lab. During the calibration, tracking and recognition of natural targets were used, and the spatially corresponding points were set at dispersed, well-distributed positions. We evaluated the precision of this calibration for view angles ranging from 0 to 70 degrees. Relying on these results, we calculated the position of the human eye relative to the world coordinate system and rendered 3D objects of arbitrary complexity in real time on the OSTHMD, accurately matched to the real world. Finally, we report user feedback on the device's suitability for combining entertainment with the prevention of cervical vertebra diseases.

  16. Talk to the virtual hands: self-animated avatars improve communication in head-mounted display virtual environments.

    Directory of Open Access Journals (Sweden)

    Trevor J Dodds

    BACKGROUND: When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help communicate the meaning of a word. Participants worked in pairs and played a communication game in which one person had to describe the meanings of words to the other. PRINCIPAL FINDINGS: In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e., both describing and guessing avatars were self-animated) compared with both avatars in a static neutral pose. Participants 'passed' (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation compared with an avatar animated by their partner's real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. CONCLUSIONS: Taken together, the studies show that (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display; we discuss possible explanations for this and ideas for future investigation.

  17. Usability Comparisons of Head-Mounted vs. Stereoscopic Desktop Displays in a Virtual Reality Environment with Pain Patients.

    Science.gov (United States)

    Tong, Xin; Gromala, Diane; Gupta, Dimple; Squire, Pam

    2016-01-01

    Researchers have shown that immersive Virtual Reality (VR) can serve as an unusually powerful pain control technique. However, research assessing the reported symptoms and negative effects of VR systems indicates that it is important to ascertain whether these symptoms arise from the use of particular VR display devices, particularly for users who are deemed "at risk," such as chronic pain patients. Moreover, these patients have specific and often complex needs and requirements, and because basic issues such as comfort may trigger anxiety or panic attacks, it is important to examine basic questions of the feasibility of using VR displays. Therefore, this repeated-measures experiment was conducted with two VR displays: the Oculus Rift head-mounted display (HMD) and Firsthand Technologies' immersive desktop display, DeepStream3D. The characteristics of these immersive displays differ: one is worn, enabling patients to move their heads, while the other is peered into, allowing less head movement. To assess the severity of physical discomforts, 20 chronic pain patients tried both displays while watching a VR pain management demo in clinical settings. Results indicated that participants experienced higher levels of simulator sickness using the Oculus Rift HMD. However, results also indicated other differences between the two VR displays, including physical comfort levels and sense of immersion. Few studies have compared the usability of specific VR devices with chronic pain patients using a therapeutic virtual environment in pain clinics. Thus, the results may help clinicians and researchers choose the most appropriate VR displays for chronic pain patients and guide VR designers in enhancing the usability of VR displays for long-term pain management interventions.

  18. Visibility of Monocular Symbology in Transparent Head-Mounted Display Applications

    Science.gov (United States)

    2015-07-08

    over terrain, were selected: 0.0 (low), 0.3 (medium), and 3.0 (high) eye-heights/second (0, 36, 324 kph). These ego-motion speeds correspond to no... i.e., thresholds worsen more than might be predicted by lack of summation alone. However, ego-motion does not appear to increase this suppression... E., Levi, D., Harwerth, R. & White, J. Color vision is altered during the suppression phase of binocular rivalry. Science 218, 802–804

  19. Geometrical waveguide in see-through head-mounted display: a review

    Science.gov (United States)

    Hou, Qichao; Wang, Qiwei; Cheng, Dewen; Wang, Yongtian

    2016-10-01

    Geometrical waveguides have clear advantages over other see-through technologies for achieving high resolution, ultra-thin thickness, light weight, and full-color display. This paper introduces the general principle of waveguide displays and discusses the key challenges of geometrical waveguide displays and ways to overcome them. Ultra-thin geometrical waveguides for see-through HMDs with different properties are reviewed, including waveguides with a partially-reflective mirror array (PRMA), trapezoidal microstructures, and triangular microstructures. Finally, a type of ultra-thin waveguide display that can be fabricated by injection molding is presented; its thickness can be reduced to less than 2 mm with an EPD of 12 mm and a FOV of 36°.

  20. A head-mounted display-based personal integrated-image monitoring system for transurethral resection of the prostate.

    Science.gov (United States)

    Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa

    2014-12-01

    The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System), using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers, as a surgical guide system for transurethral resection of the prostate (TURP). Imaging information from the cystoscope, transurethral ultrasonography (TRUS), a video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed on the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the stage of the procedure. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and the postoperative clinical courses had no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.

  1. A comparison of head-mounted and hand-held displays for 360° videos with focus on attitude and behavior change

    DEFF Research Database (Denmark)

    Fonseca, Diana; Kraus, Martin

    2016-01-01

    a between-group design experiment that compares two systems with different levels of immersion and two types of narratives, one with and one without emotional content. In the immersive video (IV) condition (high immersion), 21 participants used a Head-Mounted Display (HMD) to watch an emotional 360° video...

  2. Augmented reality 3D display using head-mounted projectors and transparent retro-reflective screen

    Science.gov (United States)

    Soomro, Shoaib R.; Urey, Hakan

    2017-02-01

    A 3D augmented reality display is proposed that can provide glasses-free stereo parallax using a highly transparent projection screen. The proposed display is based on a transparent retro-reflective screen and a pair of laser pico projectors placed close to the viewer's head. The retro-reflective screen directs incident light back towards its source with little scattering, so each of the viewer's eyes perceives only the content projected by the associated projector. Each projector displays one of the two components (left or right channel) of the stereo content. The retro-reflective nature of the screen provides high brightness compared to regular diffusing screens. Partially patterned retro-reflective material on a clear substrate makes the screen optically transparent and lets the viewer see the real-world scene on the other side of the screen. The working principle and design of the proposed see-through 3D display are presented. A tabletop prototype consisting of an in-house fabricated 60×40 cm² see-through retro-reflective screen and a pair of 30-lumen pico projectors with custom 3D-printed housings is demonstrated. Geometric calibration between projectors and optimal viewing conditions (eye box size, eye-to-projector distance) are discussed. The display performance is evaluated by measuring the brightness and crosstalk for each eye. The screen provides high brightness (up to 300 cd/m² per eye) using 30-lumen mobile projectors while maintaining 75% screen transparency. The crosstalk between left and right views is measured as <10% at the optimum distance of 125-175 cm, which is within the acceptable range.
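    Stereo crosstalk is conventionally reported as the luminance leaking into the unintended eye, expressed as a percentage of the luminance intended for the correct eye. A minimal sketch of that calculation (the 24 cd/m² leakage figure below is hypothetical, chosen only to illustrate a value inside the <10% range; the 300 cd/m² figure is from the abstract):

```python
def crosstalk_percent(intended_cd_m2, leakage_cd_m2):
    """Stereo crosstalk: luminance seen by the wrong eye as a
    percentage of the luminance intended for the correct eye."""
    return 100.0 * leakage_cd_m2 / intended_cd_m2

# Hypothetical measurement at the optimum viewing distance:
# 300 cd/m2 reaches the intended eye, 24 cd/m2 leaks to the other.
print(crosstalk_percent(300.0, 24.0))  # 8.0 -> within the <10% acceptable range
```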

  3. Real-time EO/IR sensor fusion on a portable computer and head-mounted display

    Science.gov (United States)

    Yue, Zhanfeng; Topiwala, Pankaj

    2007-04-01

    Multi-sensor platforms are widely used in surveillance video systems for both military and civilian applications. The complementary nature of different types of sensors (e.g., EO and IR) makes it possible to observe the scene under almost any condition (day/night/fog/smoke). In this paper, we propose an innovative EO/IR sensor registration and fusion algorithm that runs in real time on a portable computing unit with a head-mounted display. The EO/IR sensor suite is mounted on a dismounted soldier's helmet, and the fused scene is shown in the goggle display after processing on the portable computing unit. The linear homography transformation between images from the two sensors is precomputed for the mid-to-far scene, which reduces the computational cost of online sensor calibration. The system is implemented in highly optimized C++ code with MMX/SSE and performs registration in real time. Experimental results on real captured video show the system works very well in both speed and performance.
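    The precomputed homography registers the two sensors by mapping each IR pixel into EO image coordinates through a single 3×3 projective transform. A minimal sketch of that per-pixel mapping (the matrix values below are illustrative, not the paper's calibration result):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists).

    Homogeneous coordinates: (x, y, 1) -> (xs, ys, w), then divide by w.
    """
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# Illustrative homography: identity rotation plus a small translation,
# so IR pixel (100, 50) lands at (102, 53) in EO coordinates.
H = [[1, 0, 2],
     [0, 1, 3],
     [0, 0, 1]]
print(apply_homography(H, 100, 50))  # (102.0, 53.0)
```

    Because the matrix is fixed for the mid-to-far scene, the warp can be precomputed as a lookup table, which is what makes the real-time budget achievable on a portable unit.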

  4. The accuracy of the Oculus Rift virtual reality head-mounted display during cervical spine mobility measurement.

    Science.gov (United States)

    Xu, Xu; Chen, Karen B; Lin, Jia-Hua; Radwin, Robert G

    2015-02-26

    An inertial-sensor-embedded virtual reality (VR) head-mounted display, the Oculus Rift (the Rift), monitors head movement so the displayed content can be updated accordingly. While the Rift may have potential use in cervical spine biomechanics studies, its accuracy for cervical spine mobility measurement has not yet been validated. In the current study, a VR environment was designed to guide participants through prescribed neck movements. Cervical spine kinematics was measured by both the Rift and a reference motion tracking system. Comparison of the kinematics data between the Rift and the tracking system indicated that the Rift provides good estimates of full range of motion (from one side to the other) during the performed task. Because of inertial sensor drift, the unilateral range of motion (from one side to the neutral posture) derived from the Rift is less accurate. The root-mean-square errors over a 1-min task were within 10° for each rotation axis. The error analysis further indicated that the inertial sensor drifted approximately 6° during initialization at the beginning of a trial. This needs to be addressed in order to measure cervical spine kinematics more accurately with the Rift. It is suggested that the front cover of the Rift be aligned against a vertical plane during initialization.
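    The root-mean-square error reported above compares the HMD's angle trace against the reference system's trace, sample by sample. A minimal sketch (the yaw samples below are hypothetical, not the study's data):

```python
import math

def rmse_deg(measured, reference):
    """Root-mean-square error between two equal-length angle traces, in degrees."""
    n = len(measured)
    return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, reference)) / n)

# Hypothetical yaw samples (deg): the HMD trace drifts a few degrees
# away from the reference over the trial.
ref = [0.0, 10.0, 20.0, 10.0, 0.0]
hmd = [0.0, 11.0, 22.0, 13.0, 4.0]
print(round(rmse_deg(hmd, ref), 2))  # 2.45
```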

  5. Assessing balance through the use of a low-cost head-mounted display in older adults: a pilot study.

    Science.gov (United States)

    Saldana, Santiago J; Marsh, Anthony P; Rejeski, W Jack; Haberl, Jack K; Wu, Peggy; Rosenthal, Scott; Ip, Edward H

    2017-01-01

    As the population ages, the prevention of falls is an increasingly important public health problem. Balance assessment forms an important component of fall-prevention programs for older adults. The recent development of cost-effective and highly responsive virtual reality (VR) systems means new methods of balance assessment are feasible in a clinical setting. This proof-of-concept study made use of the submillimeter tracking built into modern VR head-mounted displays (VRHMDs) to assess balance through the use of visual-vestibular conflict. The objective of this study was to evaluate the validity, acceptability, and reliability of using a VRHMD to assess balance in older adults. Validity was assessed by comparing measurements from the VRHMD to measurements of postural sway from a force plate. Acceptability was assessed through the use of the Simulator Sickness Questionnaire pre- and postexposure to assess possible side effects of the visual-vestibular conflict. Reliability was assessed by measuring correlations between repeated measurements 1 week apart. Variables of possible importance that were found to be reliable (r≥0.9) between tests separated by a week were then tested for differences compared to a control group. Assessment was performed as a cross-sectional single-site community center-based study in 13 older adults (≥65 years old, 80.2±7.3 years old, 77% female, five at risk of falls, eight controls). The VR balance assessment consisted of four modules: a baseline module, a reaction module, a balance module, and a seated assessment. There was a significant difference in the rate at which participants with a risk of falls changed their tilt in the anteroposterior direction compared to the control group. Participants with a risk of falls changed their tilt in the anteroposterior direction at 0.7°/second vs 0.4°/second for those without a history of falls. No significant differences were found between pre/postassessment for oculomotor score or total

  6. Visual Stability of Objects and Environments Viewed through Head-Mounted Displays

    Science.gov (United States)

    Ellis, Stephen R.; Adelstein, Bernard D.

    2015-01-01

    Virtual Environments (aka Virtual Reality) are again catching the public imagination, and a number of startups (e.g., Oculus) and not-so-startup companies (e.g., Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the dynamic fidelity, visual quality, and interesting content required for the concept of VR to take off and change the world in ways it failed to do in past incarnations. The surprisingly long history of the direct-simulation technology that underlies virtual environment and augmented reality displays will be briefly reviewed. An augmented reality display system from our lab, built in the mid-1990s with good dynamic performance, will be used to illustrate some of the underlying phenomena and technology concerning the visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.

  7. Gaussian Light Field: Estimation of Viewpoint-Dependent Blur for Optical See-Through Head-Mounted Displays.

    Science.gov (United States)

    Itoh, Yuta; Amano, Toshiyuki; Iwai, Daisuke; Klinker, Gudrun

    2016-11-01

    We propose a method to calibrate the viewpoint-dependent, channel-wise image blur of near-eye displays, especially Optical See-Through Head-Mounted Displays (OST-HMDs). Imperfections in HMD optics cause channel-wise image shift and blur that degrade the image quality of the display at the user's viewpoint. If we could estimate these characteristics perfectly, we could mitigate the effect by applying correction techniques from computational photography in computer vision, as is done for cameras. Unfortunately, directly applying existing camera calibration techniques to OST-HMDs is not straightforward. Unlike ordinary imaging systems, image blur in OST-HMDs is viewpoint-dependent, i.e., the optical characteristics of the display change dynamically with the user's current viewpoint. This makes the problem challenging, since we must measure the image blur of an HMD, ideally, over the entire 3D eyebox in which a user can see an image. To overcome this problem, we model the viewpoint-dependent blur as a Gaussian Light Field (GLF), which stores the spatial information of the display screen as a 4D light field with depth information, and the blur as point-spread functions in the form of Gaussian kernels. We first describe our GLF model and a calibration procedure to learn a GLF for a given OST-HMD. We then apply our calibration method to two HMDs that use different optics: a cubic prism and holographic gratings. The results show that our method achieves significantly better accuracy in Point-Spread Function (PSF) estimation, improving peak SNR by about 2 to 7 dB.
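    A peak-SNR improvement in dB depends only on the ratio of the mean-squared errors before and after the better estimate. A short sketch of the metric (the MSE values below are hypothetical, assuming an 8-bit signal peak of 255):

```python
import math

def peak_snr_db(mse, peak=255.0):
    """Peak SNR in dB for a given mean-squared error and signal peak."""
    return 10.0 * math.log10(peak ** 2 / mse)

# Hypothetical PSF-estimation errors: cutting MSE by a factor of 4
# raises peak SNR by 10*log10(4) ~= 6 dB, inside the reported 2-7 dB range.
print(round(peak_snr_db(100.0), 2))  # baseline estimate
print(round(peak_snr_db(25.0), 2))   # improved estimate, ~6 dB higher
```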

  8. Wireless communication technology as applied to head mounted display for a tactical fighter pilot

    Science.gov (United States)

    Saini, Gurdial S.

    2007-04-01

    The use of Helmet-Mounted Display/Trackers (HMD/Ts) for air-to-air, within-visual-range target acquisition by tactical fighter pilots is becoming widespread. HMD/Ts present a significant amount of information on the helmet, which relieves the aircrew of having to continually look down into the cockpit to receive information. HMD/Ts allow the aircrew to receive flight and targeting information regardless of line of sight, which should increase the aircrew's situation awareness and mission effectiveness. Current technology requires that a pilot wearing a Helmet-Mounted Display/Tracker be connected to the aircraft with a cable. The design of this cable is complex and costly, and its use can decrease system reliability. Most of the problems associated with the cable can be alleviated by using wireless transmission for all signals. This would significantly reduce or eliminate the interconnect cable/connector, reducing system complexity and cost and enhancing system safety. A number of wireless communication technologies are discussed in this paper, and the rationale for selecting one particular technology for this application is shown. The problems with this implementation and the direction of future effort are outlined.

  9. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom.

    Science.gov (United States)

    Cutolo, Fabrizio; Meola, Antonio; Carbone, Marina; Sinceri, Sara; Cagnazzo, Federico; Denaro, Ennio; Esposito, Nicola; Ferrari, Mauro; Ferrari, Vincenzo

    2017-12-01

    The benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in targeting complex neurological lesions. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetic tumors and eloquent areas. A two-phase evaluation process was adopted in a simulated resection of a small tumor adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training performing spatial judgment tasks. In Phase II, three surgeons assessed the effectiveness of the AR neuronavigator in brain tumor targeting on the patient-specific head phantom. Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR neuronavigator to aid in determining the optimal surgical access to the target. The AR neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.

  10. Towards intelligent environments: an augmented reality-brain-machine interface operated with a see-through head-mount display

    Directory of Open Access Journals (Sweden)

    Kouji eTakano

    2011-04-01

    The brain-machine interface (BMI) or brain-computer interface (BCI) is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flickering visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that with an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR-BMI with both the HMD and the LCD monitor. Our results indicate that AR-BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments.

  11. Assessing balance through the use of a low-cost head-mounted display in older adults: a pilot study

    Directory of Open Access Journals (Sweden)

    Saldana SJ

    2017-08-01

    Santiago J Saldana,1 Anthony P Marsh,2 W Jack Rejeski,2 Jack K Haberl,2 Peggy Wu,3 Scott Rosenthal,4 Edward H Ip1 1Department of Biostatistical Sciences, Wake Forest School of Medicine, 2Department of Health and Exercise Science, Wake Forest University, Winston-Salem, NC, 3Research and Development, Smart Information Flow Technologies, Minneapolis, MN, 4Wake Forest School of Medicine, Winston-Salem, NC, USA Introduction: As the population ages, the prevention of falls is an increasingly important public health problem. Balance assessment forms an important component of fall-prevention programs for older adults. The recent development of cost-effective and highly responsive virtual reality (VR) systems means new methods of balance assessment are feasible in a clinical setting. This proof-of-concept study made use of the submillimeter tracking built into modern VR head-mounted displays (VRHMDs) to assess balance through the use of visual–vestibular conflict. The objective of this study was to evaluate the validity, acceptability, and reliability of using a VRHMD to assess balance in older adults. Materials and methods: Validity was assessed by comparing measurements from the VRHMD to measurements of postural sway from a force plate. Acceptability was assessed through the use of the Simulator Sickness Questionnaire pre- and postexposure to assess possible side effects of the visual–vestibular conflict. Reliability was assessed by measuring correlations between repeated measurements 1 week apart. Variables of possible importance that were found to be reliable (r≥0.9) between tests separated by a week were then tested for differences compared to a control group. Assessment was performed as a cross-sectional single-site community center-based study in 13 older adults (≥65 years old, 80.2±7.3 years old, 77% female, five at risk of falls, eight controls). The VR balance assessment consisted of four modules: a baseline module, a reaction module, a

  12. Effects on visual functions during tasks of object handling in virtual environment with a head mounted display.

    Science.gov (United States)

    Kawara, T; Ohmi, M; Yoshizawa, T

    1996-11-01

    This study examined the effects of a prolonged object-handling task within a helmet-mounted display environment on visual functions. Both version eye movements and the accommodative response became gradually slower during the 40-min task. Although delayed presentation of the display after head movement noticeably worsened both visual responses, presentation delay after hand movement did not significantly change the sluggishness of the responses. It is therefore suggested that decreasing the time delay after head movement is the more important factor for improving human performance in handling tasks within the HMD environment.

  13. Life test results of OLED-XL long-life devices for use in active matrix organic light emitting diode (AMOLED) displays for head mounted applications

    Science.gov (United States)

    Fellowes, David A.; Wood, Michael V.; Hastings, Arthur R., Jr.; Ghosh, Amalkumar P.; Prache, Olivier

    2007-04-01

    eMagin Corporation has recently developed long-life OLED-XL devices for use in their AMOLED microdisplays for head-worn applications. AMOLED displays are known to exhibit high levels of performance with regard to contrast, response time, uniformity, and viewing angle, but a lifetime improvement has been perceived as essential for broadening the applications of OLEDs in the military and commercial markets. The new OLED-XL devices promise improvements in usable lifetime of over 6X what the standard full-color, white, and green devices can provide. The US Army's RDECOM CERDEC NVESD performed life tests on several standard and OLED-XL panels from eMagin under a Cooperative Research and Development Agreement (CRADA). Displays were tested at room temperature using eMagin's Design Reference Kit driver, allowing computer-controlled optimization, brightness adjustment, and manual temperature compensation. The OLED usable-lifetime model, developed under a previous NVESD/eMagin SPIE paper presented at DSS 2005, has been adjusted based on the findings of these tests. The result is a better understanding of the applicability of AMOLEDs in military and commercial head-mounted systems: where good fits are made, and where further development might be needed.

  14. Assessment of visual space recognition in patients with visual field defects using head mounted display (HMD) system: case study with severe visual field defect.

    Science.gov (United States)

    Sugihara, Shunichi; Tanaka, Toshiaki; Miyasaka, Tomoya; Izumi, Takashi; Shimizu, Koichi

    2013-01-01

    For the quantitative assessment of visual field defects in cerebrovascular patients, we developed a new measurement system that can present various kinds of visual information to the patient. The system uses a head mounted display as the display device. Quantitative assessment becomes possible by adding the capability to measure eye movement and head movement simultaneously by means of a video motion-analysis apparatus. In our study, we examined the effectiveness of this system by applying it to a patient with serious visual field defects. A visual image of a reduced test paper was presented to the patient, and the effect on his/her spatial recognition and eye movement was investigated. The results indicated an increase in the ratio of visual search toward the reduced side. With the reduced image, a decrease in the angular velocity of eye movement was recognized during visual search in the defect side. In the motion analysis, no head movement was observed, while eye movements appeared corresponding to each of the different conditions. This confirmed that the patient coped with this kind of test by eye movement alone. The analysis confirmed the effectiveness and usefulness of the developed system, which enables us to evaluate abnormal and compensatory eye movements.

  15. Assessment of visual space recognition of patients with unilateral spatial neglect and visual field defects using a head mounted display system.

    Science.gov (United States)

    Sugihara, Shunichi; Tanaka, Toshiaki; Miyasaka, Tomoya; Izumi, Takashi; Shimizu, Koichi

    2016-01-01

    [Purpose] The purpose of this study was to develop a method for presenting diverse visual information and assessing visual space recognition using a new head mounted display (HMD) system. [Subjects] Eight patients: four with unilateral spatial neglect (USN) and four with visual field defects (VFD). [Methods] A test sheet was placed on a desk, and its image was projected on the display of the HMD. Space recognition was then assessed using a cancellation test and motion analysis of the eyeballs and head under four conditions in which the images were reduced in size and shifted. [Results] Leftward visual search was dominant in VFD patients, while rightward visual search was dominant in USN patients. The angular velocity of leftward eye movement during visual search of the right sheet decreased in both patient types. Motion analysis revealed a tendency of VFD patients to rotate the head toward the affected direction under the left-reduction condition, whereas USN patients rotated it in the direction opposite the neglect. [Conclusion] A new HMD system for presenting diverse visual information and assessing visual space recognition was developed, and it identified differences in the disturbance of visual space recognition between VFD and USN patients.

  16. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  17. Using a three-dimension head mounted displayer in audio-visual sexual stimulation aids in differential diagnosis of psychogenic from organic erectile dysfunction.

    Science.gov (United States)

    Moon, K-H; Song, P-H; Park, T-C

    2005-01-01

    We designed this study to compare the efficacy of a three-dimensional head mounted display (3-D HMD) and a conventional monitor for audio-visual sexual stimulation (AVSS) in the differential diagnosis of psychogenic from organic erectile dysfunction (ED). Three groups of subjects (psychogenic ED, organic ED, and healthy controls) received the evaluation. The change in penile tumescence during AVSS was monitored with Nocturnal Electrobioimpedance Volumetric Assessment, and sexual arousal after AVSS was assessed by a simple question as being good, fair, or poor. Both the healthy control and psychogenic ED groups demonstrated a significantly higher rate of normal response in penile tumescence (P<0.05) and a significantly higher level of sexual arousal (P<0.05) when stimulated with the 3-D HMD rather than the conventional monitor. In the organic ED group, even using the 3-D HMD in AVSS did not give rise to a better response in either assessment. Therefore, we conclude that a 3-D HMD in AVSS helps more than a conventional monitor to differentiate psychogenic from organic ED.

  18. Light-weight monocular display unit for 3D display using polypyrrole film actuator

    Science.gov (United States)

    Sakamoto, Kunio; Ohmori, Koji

    2010-10-01

    The human vision system has visual functions for viewing 3D images with correct depth: accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image shift optics for generating monocular parallax images, but the conventional image shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a polypyrrole linear actuator.

  19. The head-mounted microscope.

    Science.gov (United States)

    Chen, Ting; Dailey, Seth H; Naze, Sawyer A; Jiang, Jack J

    2012-04-01

    Microsurgical equipment has greatly advanced since the introduction of the microscope into the operating room. These advancements have allowed for superior surgical precision and better post-operative results. This study focuses on the use of the Leica HM500 head-mounted microscope for the operating phonosurgeon. The head-mounted microscope has an optical zoom from 2× to 9× and provides a working distance from 300 mm to 700 mm. The headpiece, with its articulated eyepieces, adjusts easily to head shape and circumference, and offers a focus function, which is either automatic or manually controlled. We performed five microlaryngoscopic operations utilizing the head-mounted microscope with successful results. By creating a more ergonomically favorable operating posture, a surgeon may be able to obtain greater precision and success in phonomicrosurgery. Phonomicrosurgery requires the precise manipulation of long-handled cantilevered instruments through the narrow bore of a laryngoscope. The head-mounted microscope shortens the working distance compared with a stand microscope, thereby increasing arm stability, which may improve surgical precision. Also, the head-mounted design permits flexibility in head position, enabling operator comfort and delaying musculoskeletal fatigue. A head-mounted microscope decreases the working distance and provides better ergonomics in laryngoscopic microsurgery. These advances provide the potential to promote precision in phonomicrosurgery. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  20. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in 3D display environments is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are considered for a detailed calculation of the satisfaction level of monocular accommodation by means of geometrical optics. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experiment results consistently show a relatively high level of satisfaction of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (3D effect) on a monocular MF display is discussed.
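The defocus effect that such geometrical-optics calculations quantify can be sketched with a small-angle thin-lens model: the angular blur-circle size is roughly the pupil diameter times the accommodative error in diopters. This is a generic illustrative model, not the authors' actual computation.

```python
import math

def defocus_blur_arcmin(stimulus_m, accommodation_m, pupil_mm=3.0):
    """Approximate angular blur-circle size (arcminutes) for an eye
    accommodated at accommodation_m while viewing a stimulus at
    stimulus_m, using a small-angle thin-lens approximation."""
    delta_d = abs(1.0 / stimulus_m - 1.0 / accommodation_m)  # defocus in diopters
    blur_rad = (pupil_mm / 1000.0) * delta_d                 # blur angle in radians
    return math.degrees(blur_rad) * 60.0                     # convert to arcminutes

# Stimulus at 0.5 m with the eye focused at 1.0 m gives 1 D of defocus,
# i.e. roughly 10 arcmin of blur for a 3 mm pupil.
print(round(defocus_blur_arcmin(0.5, 1.0), 1))
```

A multi-focus display reduces `delta_d` by placing image planes near the vergence distance, which is why accommodation satisfaction is higher in the MF condition.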

  1. Visual Field Testing with Head-Mounted Perimeter ‘imo’

    Science.gov (United States)

    Matsumoto, Chota; Yamao, Sayaka; Nomoto, Hiroki; Takada, Sonoko; Okuyama, Sachiko; Kimura, Shinji; Yamanaka, Kenzo; Aihara, Makoto; Shimomura, Yoshikazu

    2016-01-01

    Purpose We developed a new portable head-mounted perimeter, “imo”, which performs visual field (VF) testing under flexible conditions without a dark room. Besides the monocular eye test, imo can present a test target randomly to either eye without occlusion (a binocular random single eye test). The performance of imo was evaluated. Methods Using a full HD transmissive LCD and high intensity LED backlights, imo can display a test target under the same test conditions as the Humphrey Field Analyzer (HFA). The monocular and binocular random single eye tests by imo and the HFA test were performed on 40 eyes of 20 subjects with glaucoma. VF sensitivity results from the monocular and binocular random single eye tests were compared, and these test results were further compared to those from the HFA. The subjects were asked whether they noticed which eye was being tested during the test. Results The mean sensitivity (MS) obtained with the HFA highly correlated with the MS by the imo monocular test (R: r = 0.96, L: r = 0.94, P < 0.001) and the binocular random single eye test (R: r = 0.97, L: r = 0.98, P < 0.001). The MS values by the monocular and binocular random single eye tests also highly correlated (R: r = 0.96, L: r = 0.95, P < 0.001). No subject could detect which eye was being tested during the examination. Conclusions The perimeter imo can obtain VF sensitivity highly compatible with that obtained by a standard automated perimeter. The binocular random single eye test provides a non-occlusion test condition without the examinee being aware of the tested eye. PMID:27564382
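The agreement measure reported here (r values between imo and HFA mean sensitivities) is a Pearson correlation, which can be sketched as follows; the sample values below are illustrative, not the study's data.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two sets of pointwise
    VF sensitivities (dB), e.g. one per perimeter for the same eyes."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

# Perfectly linearly related sensitivities give r = 1.0.
print(pearson_r([24.0, 27.0, 30.0], [23.0, 26.0, 29.0]))
```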

  2. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth: accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image shift optics for generating monocular parallax images, but the conventional image shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  3. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems were developed and the possibility of satisfying eye accommodation was tested. Multi-focus refers to the ability to provide the monocular depth cue at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes that can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular convergence 3D effect of the systems were tested, and proof of the satisfaction of accommodation and experimental results of binocular 3D fusion are given using the proposed 3D display systems.

  4. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Virtual-reality systems have become popular and their technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems fall into two types by presentation method: systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images: the conventional display can show only one screen and cannot enlarge it, for example to twice the size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to view a virtual image plane and enlarges the screen area twofold. In the developed display unit, we used an image separating technique based on polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  5. Hand Pose Estimation in Head-mounted Display Virtual Environments

    Institute of Scientific and Technical Information of China (English)

    周来; 顾宏斌; 汤勇

    2011-01-01

    On the demand for hand interaction in the head-mounted display environments of virtual reality systems, a novel method of hand pose estimation is proposed in this paper. The proposed method converts the hand tracking problem into a hand pose indexing problem through locality sensitive hashing (LSH). The histogram of oriented gradients (HOG) serves as the indexing feature, which can be indexed by an improved multi-probe scheme with high efficiency. As the indexing results are weighted by temporal consistency, the hand pose configuration, including finger joints and wrist pose, can be estimated without markers. Our experiments applying the proposed method to a virtual cockpit system verify its practicability and demonstrate its robustness to the hand self-occlusion problem.
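The retrieval pipeline summarized above (HOG features used as keys for nearest-neighbor pose lookup) can be sketched roughly as follows. The global orientation histogram and brute-force search below are simplified stand-ins for the paper's cell-structured HOG descriptor and multi-probe LSH index.

```python
import numpy as np

def hog_feature(img, bins=9):
    """Global histogram of gradient orientations, magnitude-weighted.
    A coarse stand-in for a full HOG descriptor (cell/block structure
    and normalization omitted)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)        # L1-normalize

def nearest_pose(query_feat, db_feats, db_poses):
    """Brute-force nearest neighbor in feature space; the paper
    replaces this step with multi-probe LSH for efficiency."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return db_poses[int(np.argmin(dists))]
```

Temporal-consistency weighting would then re-rank the retrieved candidates by their agreement with the pose estimated in previous frames.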

  6. Current progress in head-mounted display based on retinal scanning

    Institute of Scientific and Technical Information of China (English)

    呼新荣; 刘英; 王健; 李淳; 孙强; 李晶; 刘兵

    2014-01-01

    In recent years, along with the trend toward lightweight and miniaturized head mounted displays (HMD), a new style of HMD based on retinal scanning has gradually become a research hotspot in both the virtual reality and helmet mounted display fields. Using a unique scanning device to control the coherent beam generated by a laser diode (LD) and scan it in two dimensions to produce an image, a retinal scanning display (RSD) can scan an image directly onto the observer's retina, offering the advantages of a large field of view, high brightness and a compact structure. Based on foreign research reports and results on RSDs, the basic principle and technological development of the RSD are briefly summarized, the current progress and key technologies in this field are emphasized, and a brief outlook on future development trends and application prospects of the RSD is given.

  7. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    Science.gov (United States)

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  8. Avoiding monocular artifacts in clinical stereotests presented on column-interleaved digital stereoscopic displays.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Vancleef, Kathleen; Read, Jenny C A

    2016-11-01

    New forms of stereoscopic 3-D technology offer vision scientists new opportunities for research, but also come with distinct problems. Here we consider autostereo displays where the two eyes' images are spatially interleaved in alternating columns of pixels and no glasses or special optics are required. Column-interleaved displays produce an excellent stereoscopic effect, but subtle changes in the angle of view can increase cross talk or even interchange the left and right eyes' images. This creates several challenges to the presentation of cyclopean stereograms (containing structure which is only detectable by binocular vision). We discuss the potential artifacts, including one that is unique to column-interleaved displays, whereby scene elements such as dots in a random-dot stereogram appear wider or narrower depending on the sign of their disparity. We derive an algorithm for creating stimuli which are free from this artifact. We show that this and other artifacts can be avoided by (a) using a task which is robust to disparity-sign inversion (for example, a disparity-detection rather than discrimination task), (b) using our proposed algorithm to ensure that parallax is applied symmetrically on the column-interleaved display, and (c) using a dynamic stimulus to avoid monocular artifacts from motion parallax. In order to test our recommendations, we performed two experiments using a stereoacuity task implemented with a parallax-barrier tablet. Our results confirm that these recommendations eliminate the artifacts. We believe that these recommendations will be useful to vision scientists interested in running stereo psychophysics experiments using parallax-barrier and other column-interleaved digital displays.
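The column interleaving and the symmetric application of parallax mentioned in recommendation (b) can be sketched as follows; this is an illustrative reconstruction, not the authors' published algorithm.

```python
import numpy as np

def interleave_columns(left, right):
    """Build a column-interleaved frame: even pixel columns come from
    the left-eye image, odd columns from the right-eye image."""
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]
    out[:, 1::2] = right[:, 1::2]
    return out

def split_disparity(d):
    """Split a disparity of d columns symmetrically between the eyes
    (left shifts by -d//2, right by the remainder), instead of shifting
    one eye's image by the full amount, so that dots keep the same
    apparent width regardless of disparity sign."""
    return -(d // 2), d - d // 2
```

For example, a 4-column disparity becomes a shift of -2 columns in the left eye and +2 in the right, rather than 0 and +4.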

  9. Modeling Of A Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness

    Science.gov (United States)

    2017-03-27

    USAARL Report No. 2017-10. Modeling of a Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness. By Thomas H. Harding, Maria E. Raatz, John S. Martin, and Clarence E. Rash. U.S. Army Aeromedical Research Laboratory; PM Air Warrior, PEO Soldier, Huntsville, AL 35806-3302. Approved for public release; distribution unlimited.

  10. Head-Mounted Sensory Augmentation Device: Designing a Tactile Language.

    Science.gov (United States)

    Kerdegari, Hamideh; Kim, Yeongmi; Prescott, Tony J

    2016-01-01

    Sensory augmentation operates by synthesizing new information and then displaying it through an existing sensory channel, and can be used to help people with impaired sensing or to assist in tasks where sensory information is limited or sparse, for example, when navigating in a low visibility environment. This paper presents the design of a 2nd generation head-mounted vibrotactile interface as a sensory augmentation prototype designed to present navigation commands that are intuitive, informative, and minimize information overload. We describe an experiment in a structured environment in which the user navigates along a virtual wall whilst the position and orientation of the user's head is tracked in real time by a motion capture system. Navigation commands in the form of vibrotactile feedback are presented according to the user's distance from the virtual wall and their head orientation. We test the four possible combinations of two command presentation modes (continuous, discrete) and two command types (recurring, single). We evaluated the effectiveness of this 'tactile language' according to the users' walking speed and the smoothness of their trajectory parallel to the virtual wall. Results showed that recurring continuous commands allowed users to navigate with the lowest route deviation and highest walking speed. In addition, subjects preferred recurring continuous commands over the other commands.
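As a rough illustration of the distance-to-command mapping described above, here is a hypothetical sketch. The threshold values, command names, and banding logic are invented for illustration and are not the paper's actual tactile language.

```python
def tactile_command(distance_m, target_m=0.6, tol_m=0.1):
    """Map the user's distance from the virtual wall to a steering
    command (hypothetical values; the study's real mapping also uses
    head orientation)."""
    if distance_m < target_m - tol_m:
        return "away"     # too close: cue the user to steer away from the wall
    if distance_m > target_m + tol_m:
        return "toward"   # too far: cue the user to steer toward the wall
    return "forward"      # within the corridor: keep walking parallel

print(tactile_command(0.3), tactile_command(0.6), tactile_command(1.0))
```

In a recurring-continuous mode, such a command would be re-emitted on every update of the tracked head pose until the user re-enters the target band.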

  11. Exploring Virtual Worlds with Head-Mounted Displays

    Science.gov (United States)

    1989-02-01

    horizontally and vertically with a common binocular field of up to 90°. One complication of this system is that some predistortion is required to...wide angle optical systems as the LEEP system provide extremely wide fields of view. The tradeoff is that images must be predistorted to compensate...image on the real world, for the view of the real world must also be predistorted to achieve correct results. Our next generation unit, currently being

  12. Light Field Rendering for Head Mounted Displays using Pixel Reprojection

    DEFF Research Database (Denmark)

    Hansen, Anne Juhler; Kraus, Martin; Klein, Jákup

    2017-01-01

    of the information of the different images is redundant, we use pixel reprojection from the corner cameras to compute the remaining images in the light field. We compare the reprojected images with directly rendered images in a user test. In most cases, the users were unable to distinguish the images. In extreme...

  13. Using a Head-Mounted Camera to Infer Attention Direction

    Science.gov (United States)

    Schmitow, Clara; Stenberg, Gunilla; Billard, Aude; von Hofsten, Claes

    2013-01-01

    A head-mounted camera was used to measure head direction. The camera was mounted to the forehead of 20 6- and 20 12-month-old infants while they watched an object held at 11 horizontal (-80° to +80°) and 9 vertical (-48° to +50°) positions. The results showed that the head always moved less than required to be on target. Below 30° in the…

  14. Inexpensive Monocular Pico-Projector-based Augmented Reality Display for Surgical Microscope.

    Science.gov (United States)

    Shi, Chen; Becker, Brian C; Riviere, Cameron N

    2012-01-01

    This paper describes an inexpensive pico-projector-based augmented reality (AR) display for a surgical microscope. The system is designed for use with Micron, an active handheld surgical tool that cancels hand tremor of surgeons to improve microsurgical accuracy. Using the AR display, virtual cues can be injected into the microscope view to track the movement of the tip of Micron, show the desired position, and indicate the position error. Cues can be used to maintain high performance by helping the surgeon to avoid drifting out of the workspace of the instrument. Also, boundary information such as the view range of the cameras that record surgical procedures can be displayed to tell surgeons the operation area. Furthermore, numerical, textual, or graphical information can be displayed, showing such things as tool tip depth in the work space and on/off status of the canceling function of Micron.

  15. The effect of a monocular helmet-mounted display on aircrew health: a 10-year prospective cohort study of Apache AH MK 1 pilots: study midpoint update

    Science.gov (United States)

    Hiatt, Keith L.; Rash, Clarence E.; Watters, Raymond W.; Adams, Mark S.

    2009-05-01

    A collaborative occupational health study has been undertaken by Headquarters Army Aviation, Middle Wallop, UK, and the U.S. Army Aeromedical Research Laboratory, Fort Rucker, Alabama, to determine if the use of the Integrated Helmet and Display Sighting System (IHADSS) monocular helmet-mounted display (HMD) in the Apache AH Mk 1 attack helicopter has any long-term (10-year) effect on visual performance. The test methodology consists primarily of a detailed questionnaire and an annual battery of vision tests selected to capture changes in visual performance of Apache aviators over their flight career (with an emphasis on binocular visual function). Pilots using binocular night vision goggles serve as controls and undergo the same methodology. Currently, at the midpoint of the study, with the exception of a possible colour discrimination effect, there are no data indicating that the long-term use of the IHADSS monocular HMD results in negative effects on vision.

  16. Helmet-Mounted Displays (HMD)

    Data.gov (United States)

    Federal Laboratory Consortium — The Helmet-Mounted Display lab is responsible for monocular HMD day display evaluations; monocular HMD night vision performance processes; binocular HMD day display...

  17. CUSUM analysis of learning curves for the head-mounted microscope in phonomicrosurgery.

    Science.gov (United States)

    Chen, Ting; Vamos, Andrew C; Dailey, Seth H; Jiang, Jack J

    2016-10-01

    To observe the learning curve of the head-mounted microscope in a phonomicrosurgery simulator using cumulative summation (CUSUM) analysis, which incorporates a magnetic phonomicrosurgery instrument tracking system (MPTS). Retrospective case series. Eight subjects (6 medical students and 2 surgeons inexperienced in phonomicrosurgery) performed phonomicrosurgical simulation cutting tasks while using the head-mounted microscope for 400 minutes total. Two 20-minute sessions occurred each day for 10 total days, with operation quality (Qs) and completion time (T) recorded after each session. CUSUM analysis of Qs and T was performed using subjects' performance data from trials completed with a traditional standing microscope as the success criteria. The motion parameters from the head-mounted microscope were significantly better than those from the standing microscope, but Qs and T were worse with the head-mounted microscope, as assessed by CUSUM analysis. CUSUM analysis can objectively monitor the learning process associated with a phonomicrosurgical simulator system, ultimately providing a tool to assess learning status. Also, motion parameters determined by our MPTS showed that, although the head-mounted microscope provides better motion control, worse Qs and longer T resulted. This decrease in Qs is likely a result of the relatively unstable visual environment that it provides. Overall, the inexperienced surgeons participating in this study failed to adapt to the head-mounted microscope in our simulated phonomicrosurgery environment. 4 Laryngoscope, 126:2295-2300, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
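The CUSUM charting used for learning-curve monitoring can be sketched in its generic success/failure form: each trial is scored against a reference value, and the running sum (floored at zero) rises when performance falls short. The reference value `s` below is a conventional textbook choice, not necessarily the study's parameter.

```python
def cusum_scores(outcomes, s=0.2):
    """Cumulative-sum chart over success(True)/failure(False) trials.
    Each failure adds (1 - s), each success subtracts s; the sum is
    floored at zero. A score staying near zero indicates proficiency,
    while a rising score signals performance below the reference."""
    scores, c = [], 0.0
    for ok in outcomes:
        c = max(0.0, c + (-s if ok else 1.0 - s))
        scores.append(round(c, 10))
    return scores

# A run of failures drives the score up; later successes bring it down.
print(cusum_scores([False, False, True, True, True]))
```

In the study's setting, a trial would count as a success when its Qs and T match the criteria derived from the standing-microscope baseline.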

  18. Talking about the design of a head-mounted eye tracker

    Institute of Scientific and Technical Information of China (English)

    麦伟强

    2012-01-01

      This paper briefly describes a head-mounted eye tracker that integrates head-mounted display guidance, infrared LED illumination, and Bluetooth communication. Using a pupil-localization algorithm and the head-mounted eye-control device, users can operate a virtual interface through eye gaze, which is of great significance in helping people with disabilities such as paraplegia, muscular atrophy, and stroke.

  19. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    Science.gov (United States)

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  20. Head-Mounted Eye Tracking: A New Method to Describe Infant Looking

    Science.gov (United States)

    Franchak, John M.; Kretch, Kari S.; Soska, Kasey C.; Adolph, Karen E.

    2011-01-01

    Despite hundreds of studies describing infants' visual exploration of experimental stimuli, researchers know little about where infants look during everyday interactions. The current study describes the first method for studying visual behavior during natural interactions in mobile infants. Six 14-month-old infants wore a head-mounted eye-tracker…

  1. Head mounted device for point-of-gaze estimation in three dimensions

    DEFF Research Database (Denmark)

    Lidegaard, Morten; Witzner Hansen, Dan; Krüger, Norbert

    2014-01-01

    This paper presents a fully calibrated extended geometric approach for gaze estimation in three dimensions (3D). The methodology is based on a geometric approach utilising a fully calibrated binocular setup constructed as a head-mounted system. The approach is based on utilisation of two ordinary...... web-cameras for each eye and 6D magnetic sensors allowing free head movements in 3D. Evaluation of initial experiments indicate comparable results to current state-of-the-art on estimating gaze in 3D. Initial results show an RMS error of 39-50 mm in the depth dimension and even smaller...... in the horizontal and vertical dimensions regarding fixations. However, even though the workspace is limited, the fact that the system is designed as a head-mounted device, the workspace volume is relatively positioned to the pose of the device. Hence gaze can be estimated in 3D with relatively free head...

  2. Design of refractive/diffractive objective for head-mounted night vision goggle

    Science.gov (United States)

    Zhao, Qiu-Ling; Wang, Zhao-Qi; Fu, Ru-Lian; Sun, Qiang; Lu, Zhen-Wu

    A refractive/diffractive objective for a head-mounted night vision goggle was designed. The objective consists of six elements, including one binary surface and two hyperboloids. It has a 40° field of view, an f-number of 1.25, and an 18 mm image diameter, with a compact structure and light weight. All optical specifications reach the proposed design targets. Fabrication issues for the special surfaces of the system are also considered.
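The quoted specifications can be cross-checked with first-order optics: for a distortion-free tangent mapping, the focal length follows from the image diameter and field of view, and the entrance-pupil diameter from the f-number. This is a back-of-envelope consistency check, not the actual lens prescription, and the real design need not follow an exact tangent mapping.

```python
import math

# Stated specs of the objective.
fov_deg = 40.0        # full field of view
image_diam_mm = 18.0  # image circle diameter
f_number = 1.25

# First-order estimates assuming an ideal tangent (f*tan(theta)) mapping.
f_mm = (image_diam_mm / 2) / math.tan(math.radians(fov_deg / 2))
aperture_mm = f_mm / f_number

print(round(f_mm, 1), round(aperture_mm, 1))  # prints: 24.7 19.8
```

The fast f/1.25 aperture (roughly a 20 mm entrance pupil here) is what makes such night-vision objectives light-hungry to design and a natural candidate for diffractive surfaces that trim element count and weight.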

  3. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  4. A Computational Model for the Stereoscopic Optics of a Head-Mounted Display

    Science.gov (United States)

    1991-02-01

    Non-linear field distortion causes straight lines on the screen to appear curved. This can be corrected for in the graphics system by predistorting the...the peripheral field will be positioned wrong. The only way to avoid this unpleasant choice is to predistort the image to correct the optical...transformations in the pipeline are linear. It is tempting to run only the polygon vertices through the predistortion function and let the very
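The predistortion discussed in these excerpts is commonly modeled as a radial polynomial warp applied to screen coordinates (or, as the text suggests, only to polygon vertices). A minimal sketch with a single hypothetical barrel coefficient `k1`; real HMD values come from the lens model.

```python
import numpy as np

def predistort(xy, k1=-0.18):
    """Radially pre-distort normalized screen coordinates so that the
    HMD optics' pincushion distortion cancels it, making straight
    lines appear straight. k1 < 0 gives a barrel warp; the value here
    is illustrative only."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)  # squared radius per point
    return xy * (1.0 + k1 * r2)                 # scale each point radially

# The center is unchanged; an edge point at radius 1 moves inward to 0.82.
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
print(predistort(pts))
```

Warping only the vertices, as the excerpt notes, is tempting because the rasterizer interpolates linearly between them, which is exactly where the approach breaks down for long edges in the periphery.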

  5. An Evaluation of Signal Annoyance for a Head-Mounted Tactile Display

    Science.gov (United States)

    2015-03-01

    on need. The benefit of transferring information from the visual and auditory sensory channels to the tactile channel is reduced stress on the...frequency signals (45 and 160 Hz) were included and motivated by 2 of our previous studies regarding the audible frequency discrimination of vibrotactile...ratings due to signal frequency, using α = 0.05. Post hoc paired comparisons were evaluated using Tukey's HSD (honestly significant difference) test. We

  6. Virtual Reality Design: How Head-Mounted Displays Change Design Paradigms of Virtual Reality Worlds

    Directory of Open Access Journals (Sweden)

    Christian Stein

    2016-09-01

    With the upcoming generation of virtual reality HMDs, new virtual worlds, scenarios, and games are created especially for them. These are no longer bound to a remote screen or a relatively static user, but to an HMD as a more immersive device. This article discusses requirements for virtual scenarios implemented in new-generation HMDs to achieve a comfortable user experience. Furthermore, the effects of positional tracking are introduced and the relation between the user’s virtual and physical body is analyzed. The observations made are exemplified by existing software prototypes. They indicate how the term “virtual reality,” with all its loaded connotations, may be reconceptualized to express the peculiarities of HMDs in the context of gaming, entertainment, and virtual experiences.

  7. Visual strain: a comparison of monitors and head-mounted displays

    NARCIS (Netherlands)

    Kooi, F.L.

    1997-01-01

    New methods to measure information uptake and eye strain have been developed. The speed of information uptake is measured with a reading task that demands quick and accurate eye movements. Accommodative facility is shown to be a good measure for eye strain. A standard monitor and three types of HMDs

  8. Effect of the Oculus Rift head mounted display on postural stability

    DEFF Research Database (Denmark)

    Epure, Paula; Gheorghe, Cristina; Nissen, Thomas

    2014-01-01

    This study explored how a virtual environment experienced through an HMD influences the physical balance of six balance-impaired adults, 59-69 years of age, compared to a control group of eight non-balance-impaired adults, 18-28 years of age. The setup included a Microsoft Kinect and a self-created balance ...

  9. Perceptual Issues in the Use of Head-Mounted Visual Displays

    Science.gov (United States)

    2006-01-01

    Y., & Roe, A. W. (1995). Functional compartments in visual cortex: Segregation and interaction. In M. S. Gazzaniga (Ed.), The cognitive neurosciences...DeYoe, E. A. (1995). Concurrent processing in the primate visual cortex. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 383–400

  10. Development and Application of a Wireless, Networked Raspberry Pi Controlled Head Mounted Tactile Display (HMTD)

    Science.gov (United States)

    2016-09-01

    or wireless (Wi-Fi USB adapter) setup. For wired setup (Ethernet cable needed), connect the RPi to the router through the Ethernet LAN port. Wi-Fi...in an outdoor environment where a router or access point is not available, we needed to implement the RPi in a wireless ad hoc mode. The advantage of...ARL-TN-0796 ● SEP 2016 US Army Research Laboratory Development and Application of a Wireless, Networked Raspberry Pi-Controlled

  11. Game-Based Evacuation Drill Using Augmented Reality and Head-Mounted Display

    Science.gov (United States)

    Kawai, Junya; Mitsuhara, Hiroyuki; Shishibori, Masami

    2016-01-01

    Purpose: Evacuation drills should be more realistic and interactive. Focusing on situational and audio-visual realities and scenario-based interactivity, the authors have developed a game-based evacuation drill (GBED) system that presents augmented reality (AR) materials on tablet computers. The paper's current research purpose is to improve…

  13. Wearable and augmented reality displays using MEMS and SLMs

    OpenAIRE

    Ürey, Hakan; Ulusoy, Erdem; Akşit, Kaan; Hossein, Amir; Niaki, Ghanbari

    2016-01-01

    In this talk, we present the various types of 3D displays, head-mounted projection displays and wearable displays developed in our group using MEMS scanners, compact RGB laser light sources, and spatial light modulators.

  14. Wearable and augmented reality displays using MEMS and SLMs

    Science.gov (United States)

    Urey, Hakan; Ulusoy, Erdem; Kazempourradi, Seyedmahdi M. K.; Mengu, Deniz; Olcer, Selim; Holmstrom, Sven T.

    2016-03-01

    In this talk, we present the various types of 3D displays, head-mounted projection displays and wearable displays developed in our group using MEMS scanners, compact RGB laser light sources, and spatial light modulators.

  15. Head-mounted LED for optogenetic experiments of freely-behaving animal

    Science.gov (United States)

    Kwon, Ki Yong; Gnade, Andrew G.; Rush, Alexander D.; Patten, Craig D.

    2016-03-01

    Recent developments in optogenetics have demonstrated the ability to target specific types of neurons with sub-millisecond temporal precision via direct optical stimulation of genetically modified neurons in the brain. In most applications, the beam of a laser is coupled to an optical fiber, which guides and delivers the optical power to the region of interest. Light emitting diodes (LEDs) are an alternative light source for optogenetics, and they provide many advantages over a laser-based system including cost, size, illumination stability, and fast modulation. Their compact size and low power consumption make LEDs suitable light sources for a wireless optogenetic stimulation system. However, the coupling efficiency of an LED's output light into an optical fiber is lower than a laser's due to its noncollimated output light. In a typical chronic optogenetic experiment, the output of the light source is transmitted to the brain through a patch cable and a fiber stub implant, and this configuration requires two fiber-to-fiber couplings. Attenuation within the patch cable is a potential source of optical power loss. In this study, we report and characterize a recently developed light delivery method for freely-behaving animal experiments. We have developed a head-mounted light source that maximizes the coupling efficiency of an LED light source by eliminating the need for a fiber optic cable. This miniaturized LED is designed to couple directly to the fiber stub implant. Depending on the desired optical power output, the head-mounted LED can be controlled by either a tethered (high power) or battery-powered wireless (moderate power) controller. In the tethered system, the LED is controlled through a 40-gauge micro-coaxial cable, which is thinner, more flexible, and more durable than a fiber optic cable. The battery-powered wireless system uses either infrared or radio frequency transmission to achieve real-time control. Optical, electrical, mechanical, and thermal

  16. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Cohort Study of Apache AH Mk 1 Pilots Four-Year Review

    Science.gov (United States)

    2009-12-01

    conventional Snellen charts (Bailey and Lovie, 1976). This test was conducted monocularly for both left and right eyes using the habitual correction...from logMAR to Snellen acuity (20/xx) is accomplished by computing the Snellen denominator: xx = 20 × 10^logMAR. For the...last measurement cycle, values were available for all 23 control subjects. For the right eye, the mean visual acuity was 0.08 logMAR (Snellen
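The logMAR-to-Snellen conversion quoted in this snippet is a one-liner; a minimal sketch (the function name is an illustrative assumption):

```python
def snellen_denominator(logmar):
    """Snellen denominator for a 20/xx fraction: xx = 20 * 10**logMAR.
    logMAR 0.0 corresponds to 20/20; each +0.1 logMAR step worsens
    acuity by a factor of 10**0.1, i.e. about 26%."""
    return 20 * 10 ** logmar

snellen_denominator(0.0)   # -> 20, i.e. 20/20
snellen_denominator(0.08)  # -> ~24, i.e. roughly 20/24
```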

  17. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    Science.gov (United States)

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Micromirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  18. Visuomotor adaptation in head-mounted virtual reality versus conventional training

    Science.gov (United States)

    Anglin, J. M.; Sugiyama, T.; Liew, S.-L.

    2017-01-01

    Immersive, head-mounted virtual reality (HMD-VR) provides a unique opportunity to understand how changes in sensory environments affect motor learning. However, potential differences in mechanisms of motor learning and adaptation in HMD-VR versus a conventional training (CT) environment have not been extensively explored. Here, we investigated whether adaptation on a visuomotor rotation task in HMD-VR yields adaptation effects similar to those in CT and whether these effects are achieved through similar mechanisms. Specifically, recent work has shown that visuomotor adaptation may occur via both an implicit, error-based internal model and a more cognitive, explicit strategic component. We sought to measure both overall adaptation and the balance between implicit and explicit mechanisms in HMD-VR versus CT. Twenty-four healthy individuals were placed in either HMD-VR or CT and trained on an identical visuomotor adaptation task that measured both implicit and explicit components. Our results showed that the overall time course of adaptation was similar in both HMD-VR and CT. However, HMD-VR participants utilized a greater cognitive strategy than CT, while CT participants engaged in greater implicit learning. These results suggest that while both conditions produce similar results in overall adaptation, the mechanisms by which visuomotor adaptation occurs in HMD-VR appear to be more reliant on cognitive strategies. PMID:28374808
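The implicit/explicit split measured in this record is commonly obtained by asking participants to report their aiming direction before each reach, then treating the residual hand angle as implicit adaptation. A hedged sketch of that decomposition (function and field names are illustrative, not taken from the study):

```python
import numpy as np

def decompose_adaptation(hand_angles, aim_reports):
    """Split total adaptation on a visuomotor rotation task into an
    explicit (reported aiming) and an implicit (residual) component.
    Angles are in degrees, relative to the target direction."""
    hand = np.asarray(hand_angles, dtype=float)  # where the hand actually went
    aim = np.asarray(aim_reports, dtype=float)   # where the subject said they aimed
    return {"explicit": aim, "implicit": hand - aim, "total": hand}

# Two example trials: the reported strategy stays at 15 deg while the
# residual (implicit) component grows from 5 to 10 deg.
parts = decompose_adaptation([20.0, 25.0], [15.0, 15.0])
```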

  19. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    Science.gov (United States)

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-01-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Micromirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals. PMID:27731346

  20. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupes magnification and microsurgical technique: ten were recorded with a GoPro(®) 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro(®) and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functions is about 5 min for the GoPro(®) and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro(®) to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro(®) 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Monocular transparency generates quantitative depth.

    Science.gov (United States)

    Howard, Ian P; Duke, Philip A

    2003-11-01

    Monocular zones adjacent to depth steps can create an impression of depth in the absence of binocular disparity. However, the magnitude of depth is not specified. We designed a stereogram that provides information about depth magnitude but which has no disparity. The effect depends on transparency rather than occlusion. For most subjects, depth magnitude produced by monocular transparency was similar to that created by a disparity-defined depth probe. Addition of disparity to monocular transparency did not improve the accuracy of depth settings. The magnitude of depth created by monocular occlusion fell short of that created by monocular transparency.

  2. Evaluation of Head Mounted and Head Down Information Displays During Simulated Mine-Countermeasures Dives to 42 msw

    Science.gov (United States)

    2008-04-01

    information processing is slowed at each stage by narcosis (nitrogen and carbon dioxide). It is likely that transferring information from sensory memory to...short term memory. Background noise is likely a factor in distracting the diver from attending to other, important information. Pressure (narcosis...Immersion in water increases the ambient pressure of the diving environment and may lead to narcosis. According to Fowler et al., (1985

  3. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  4. Studying complex decision making in natural settings: using a head-mounted video camera to study competitive orienteering.

    Science.gov (United States)

    Omodei, M M; McLennan, J

    1994-12-01

    Head-mounted video recording is described as a potentially powerful method for studying decision making in natural settings. Most alternative data-collection procedures are intrusive and disruptive of the decision-making processes involved, while conventional video-recording procedures are either impractical or impossible. As a severe test of the robustness of the methodology, we studied the decision making of 6 experienced orienteers who carried a head-mounted lightweight video camera as they navigated, running as fast as possible, around a set of control points in a forest. Use of the Wilcoxon matched-pairs signed-ranks test indicated that compared with free recall, video-assisted recall evoked (a) significantly greater experiential immersion in the recall, (b) significantly more specific recollections of navigation-related thoughts and feelings, (c) significantly more realizations of map and terrain features and aspects of running speed which were not noticed at the time of actual competition, and (d) significantly greater insight into specific navigational errors and the intrusion of distracting thoughts into the decision-making process. Potential applications of the technique in (a) the environments of emergency services, (b) therapeutic contexts, (c) education and training, and (d) sports psychology are discussed.

  5. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Rocca, David Della; Rocca, Robert C Della; Andron, Aleza; Jain, Vandana

    2015-01-01

    Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery. PMID:26655001

  6. SVGA and XGA active matrix microdisplays for head-mounted applications

    Science.gov (United States)

    Alvelda, Phillip; Bolotski, Michael; Brown, Imani L.

    2000-03-01

    The MicroDisplay Corporation's liquid crystal on silicon (LCOS) display devices are based on the union of several technologies with the extreme integration capability of conventionally fabricated CMOS substrates. The fast liquid crystal operation modes and new scalable high-performance pixel addressing architectures presented in this paper enable substantially improved color, contrast, and brightness while still satisfying the optical, packaging, and power requirements of portable applications. The entire suite of MicroDisplay's technologies was devised to create a line of mixed-signal application-specific integrated circuits (ASICs) in single-chip display systems. Mixed-signal circuits can integrate computing, memory, and communication circuitry on the same substrate as the display drivers and pixel array for a multifunctional complete system-on-a-chip. System-on-a-chip benefits also include reduced head-supported weight requirements through the elimination of off-chip drive electronics.

  7. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Effects of Head Mounted Devices on Head-Neck Dynamic Response to +GZ Accelerations

    Science.gov (United States)

    1989-04-01

    Schultz, A., Benson, D. and Hirsch, C., "Force-Deformation Properties of Human Costo-Sternal and Costo-Vertebral Articulations", J. Biomechanics 7

  9. The effect of contrast on monocular versus binocular reading performance.

    Science.gov (United States)

    Johansson, Jan; Pansell, Tony; Ygge, Jan; Seimyr, Gustaf Öqvist

    2014-05-14

    The binocular advantage in reading performance is typically small. On the other hand, research shows binocular reading to be remarkably robust to degraded stimulus properties. We hypothesized that this robustness may stem from an increasing binocular contribution. The main objective was to compare monocular and binocular performance at different stimulus contrasts and assess the level of binocular superiority. A secondary objective was to assess any asymmetry in performance related to ocular dominance. In a balanced repeated-measures experiment, 18 subjects read texts at three levels of contrast monocularly and binocularly while their eye movements were recorded. The binocular advantage increased with reduced contrast, producing 7% slower monocular reading at 40% contrast, 9% slower at 20% contrast, and 21% slower at 10% contrast. A statistically significant interaction effect was found in fixation duration, showing a more adverse effect in the monocular condition at the lowest contrast. No significant effects of ocular dominance were observed. The outcome suggests that binocularity contributes increasingly to reading performance as stimulus contrast decreases. The strongest difference between monocular and binocular performance was due to fixation duration. The findings make a clinical point: it may be necessary to consider tests at different contrast levels when estimating reading performance. © 2014 ARVO.

  10. Chronic monitoring of cortical hemodynamics in behaving, freely-moving rats using a miniaturized head-mounted optical microscope

    Science.gov (United States)

    Sigal, Iliya; Gad, Raanan; Koletar, Margaret; Ringuette, Dene; Stefanovic, Bojana; Levi, Ofer

    2016-03-01

    Growing interest within the neurophysiology community in assessing healthy and pathological brain activity in animals that are awake and freely-behaving has triggered the need for optical systems that are suitable for such longitudinal studies. In this work we report label-free multi-modal imaging of cortical hemodynamics in the somatosensory cortex of awake, freely-behaving rats, using a novel head-mounted miniature optical microscope. The microscope employs vertical cavity surface emitting lasers (VCSELs) at three distinct wavelengths (680 nm, 795 nm, and 850 nm) to provide measurements of four hemodynamic markers: blood flow speeds, HbO, HbR, and total Hb concentration, across a > 2 mm field of view. Blood flow speeds are extracted using Laser Speckle Contrast Imaging (LSCI), while oxygenation measurements are performed using Intrinsic Optical Signal Imaging (IOSI). Longitudinal measurements on the same animal are made possible over the course of > 6 weeks using a chronic window that is surgically implanted into the skull. We use the device to examine changes in blood flow and blood oxygenation in superficial cortical blood vessels and tissue in response to drug-induced absence-like seizures, correlating motor behavior with changes in blood flow and blood oxygenation in the brain.
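The Laser Speckle Contrast Imaging (LSCI) named in this record reduces, at its core, to a local contrast statistic K = σ/μ computed over a small sliding window of the raw speckle image; lower K means more speckle blurring and hence faster flow. A minimal sketch (the window size and function name are illustrative assumptions):

```python
import numpy as np

def speckle_contrast(frame, win=7):
    """Spatial speckle contrast K = sigma/mu over a win x win window.
    frame: raw speckle intensity image (2-D array)."""
    pad = win // 2
    f = np.pad(frame.astype(float), pad, mode="reflect")
    h, w = frame.shape
    K = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            block = f[i:i + win, j:j + win]
            mu = block.mean()
            K[i, j] = block.std() / mu if mu > 0 else 0.0
    return K
```

In practice K is related to flow speed through the speckle decorrelation time: a fully blurred (fast-flow) region approaches K = 0, while static, fully developed speckle approaches K = 1.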

  11. Monocular visual ranging

    Science.gov (United States)

    Witus, Gary; Hunt, Shawn

    2008-04-01

    The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control, to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in the image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera auto-focus function, and combines this with an estimate derived from angular expansion of a constellation of visual tracking points.
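The stationary-mode behaviour described here (range from differential blur across a focus stack) can be approximated by a shape-from-focus sketch: score each focus setting's image with a local sharpness measure and take, per pixel, the focus distance whose image scores highest. The function names and the Laplacian-energy focus measure are illustrative assumptions, not the paper's method:

```python
import numpy as np

def laplacian_energy(img):
    """Squared 4-neighbour Laplacian: a simple per-pixel sharpness measure.
    Returns an array trimmed by one pixel on each border."""
    c = img[1:-1, 1:-1]
    lap = (img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:] - 4.0 * c)
    return lap ** 2

def depth_from_focus(stack, focus_distances):
    """stack: images of the same scene taken at different focus settings.
    Assign each pixel the focus distance whose image is locally sharpest."""
    energies = np.stack([laplacian_energy(np.asarray(f, dtype=float))
                         for f in stack])
    best = energies.argmax(axis=0)  # index of sharpest setting per pixel
    return np.asarray(focus_distances)[best]
```

A real system would smooth the focus measure over a window and interpolate between focus settings; this sketch only captures the per-pixel argmax idea.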

  12. An externally head-mounted wireless neural recording device for laboratory animal research and possible human clinical use.

    Science.gov (United States)

    Yin, Ming; Li, Hao; Bull, Christopher; Borton, David A; Aceros, Juan; Larson, Lawrence; Nurmikko, Arto V

    2013-01-01

    In this paper we present a new type of head-mounted wireless neural recording device in a highly compact package, dedicated to untethered laboratory animal research and designed for future mobile human clinical use. The device, which takes its input from an array of intracortical microelectrode arrays (MEA), has ninety-seven broadband parallel neural recording channels and was integrated onto two custom-designed printed circuit boards. These house several low power, custom integrated circuits, including a preamplifier ASIC, a controller ASIC, plus two SAR ADCs, a 3-axis accelerometer, a 48 MHz clock source, and a Manchester encoder. Another ultralow power RF chip supports an OOK transmitter with the center frequency tunable from 3 GHz to 4 GHz, mounted on a separate low loss dielectric board together with a 3 V LDO, with output fed to a UWB chip antenna. The IC boards were interconnected and packaged in a polyether ether ketone (PEEK) enclosure which is compatible with both animal and human use (e.g. sterilizable). The entire system consumes 17 mA from a 1.2 Ahr 3.6 V Li-SOCl2 1/2AA battery, which operates the device for more than 2 days. The overall system includes custom RF receiver electronics designed to directly interface with any number of commercial (or custom) neural signal processors for multi-channel broadband neural recording. Bench-top measurements and in vivo testing of the device in rhesus macaques are presented to demonstrate the performance of the wireless neural interface.

  13. Transparent 3D display for augmented reality

    Science.gov (United States)

    Lee, Byoungho; Hong, Jisoo

    2012-11-01

    Two types of transparent three-dimensional display systems applicable for augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which utilizes the principle of the system adopting the concave floating lens to the virtual mode integral imaging. Such a configuration has an advantage in that the three-dimensional image can be displayed at a sufficiently far distance, resolving the accommodation conflict with the real world scene. Incorporating the convex half mirror, which shows a partial transparency, instead of the concave floating lens makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the feature of transparent display by imposing partial transparency on the array of concave mirrors used for the screen of reflection-type integral imaging. Two types of configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce the concave half mirror array, whereas the coherent one adopts a holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is in principle preferable to the head-mounted-display type, the present state of the art in spatial light modulators still does not provide satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come to market in sequence.

  14. Identification of Prey Captures in Australian Fur Seals (Arctocephalus pusillus doriferus) Using Head-Mounted Accelerometers: Field Validation with Animal-Borne Video Cameras.

    Directory of Open Access Journals (Sweden)

    Beth L Volpov

    Full Text Available This study investigated prey captures in free-ranging adult female Australian fur seals (Arctocephalus pusillus doriferus) using head-mounted 3-axis accelerometers and animal-borne video cameras. Acceleration data was used to identify individual attempted prey captures (APC), and video data were used to independently verify APC and prey types. Results demonstrated that head-mounted accelerometers could detect individual APC but were unable to distinguish among prey types (fish, cephalopod, stingray) or between successful captures and unsuccessful capture attempts. Mean detection rate (true positive rate) on individual animals in the testing subset ranged from 67-100%, and mean detection on the testing subset averaged across 4 animals ranged from 82-97%. Mean false positive (FP) rate ranged from 15-67% individually in the testing subset, and 26-59% averaged across 4 animals. Surge and sway had significantly greater detection rates, but also conversely greater FP rates compared to heave. Video data also indicated that some head movements recorded by the accelerometers were unrelated to APC and that a peak in acceleration variance did not always equate to an individual prey item. The results of the present study indicate that head-mounted accelerometers provide a complementary tool for investigating foraging behaviour in pinnipeds, but that detection and FP correction factors need to be applied for reliable field application.
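The accelerometer-based detection validated in this record boils down to flagging windows of elevated acceleration variability on one axis (e.g. surge). A hedged sketch of that idea (the threshold, window length, and function name are assumptions, not the study's actual detector):

```python
import numpy as np

def detect_capture_events(accel, fs, win_s=0.5, threshold=1.0):
    """Flag samples where the moving standard deviation of a single
    acceleration axis (in m/s^2) exceeds `threshold`.
    accel: 1-D acceleration trace; fs: sampling rate in Hz."""
    win = max(1, int(win_s * fs))
    pad = win // 2
    a = np.pad(np.asarray(accel, dtype=float), pad, mode="edge")
    sd = np.array([a[i:i + win].std() for i in range(len(accel))])
    return sd > threshold

# Quiet trace with a brief high-amplitude burst in the middle:
trace = np.zeros(200)
trace[100:110] = 5.0 * (-1.0) ** np.arange(10)
events = detect_capture_events(trace, fs=25)
```

As the record notes, such a detector also fires on unrelated head movements, so a false-positive correction factor (or validation against animal-borne video) is needed before field use.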

  15. Navigational Heads-Up Display: Will a Shipboard Augmented Electronic Navigation System Sink or Swim?

    Science.gov (United States)

    2015-03-01

    Table 6. Subject Video Game Practice... Front-matter excerpt (list of abbreviations): GPS, Global Positioning System; HMD, Head-Mounted Display; HUD, Heads-Up Display; IP, Internet Protocol; IR, Infrared; LCD, Liquid-Crystal Display; ...Stress Disorder; SDK, Software Development Kit; SOG, Speed over Ground; SWOS, Surface Warfare Officer School; USD, United States Dollar; USNS, United States...

  16. Visual comfort of binocular and 3D displays

    NARCIS (Netherlands)

    Kooi, F.L.; Toet, A.

    2004-01-01

    Imperfections in binocular image pairs can cause serious viewing discomfort. For example, in stereo vision systems eye strain is caused by unintentional mismatches between the left and right eye images (stereo imperfections). Head-mounted displays can induce eye strain due to optical misalignments.

  18. The Ultimate Display

    CERN Document Server

    Fluke, C J

    2016-01-01

    Astronomical images and datasets are increasingly high-resolution and multi-dimensional. The vast majority of astronomers perform all of their visualisation and analysis tasks on low-resolution, two-dimensional desktop monitors. If there were no technological barriers to designing the ultimate stereoscopic display for astronomy, what would it look like? What capabilities would we require of our compute hardware to drive it? And are existing technologies even close to providing a true 3D experience that is compatible with the depth resolution of human stereoscopic vision? We consider the CAVE2 (an 80 Megapixel, hybrid 2D and 3D virtual reality environment directly integrated with a 100 Tflop/s GPU-powered supercomputer) and the Oculus Rift (a low-cost, head-mounted display) as examples at opposite financial ends of the immersive display spectrum.

  19. A preliminary study of clinical assessment of left unilateral spatial neglect using a head mounted display system (HMD) in rehabilitation engineering technology

    Directory of Open Access Journals (Sweden)

    Ino Shuichi

    2005-10-01

    Full Text Available Abstract Purpose Unilateral spatial neglect (USN) is a common syndrome in which a patient fails to report or respond to stimulation from the side of space opposite a brain lesion, where these symptoms are not due to primary sensory or motor deficits. The purpose of this study was to analyze an evaluation process system for USN in various visual fields using an HMD, in order to understand more accurately any faults of USN operating in object-centred co-ordinates. Method Eight stroke patients with left USN on clinical testing participated in this study, and right-hemisphere damage was confirmed by CT scan. Assessments of USN were performed with the BIT common clinical test (the line and star cancellation tests) and with special tests under a zoom-in condition (ZI condition) and a zoom-out condition (ZO condition). The subjects were first evaluated by the common clinical test without the HMD and then by the two spatial tests with the HMD. Moreover, we video-recorded all tests to analyze each subject's movements. Results For the line cancellation test under the common condition, the mean percentage of correct answers on the left side of the test paper was 94.4%. In the ZI condition, the left side was 61.8% and the right side was 92.4%. In the ZO condition, the left side was 79.9% and the right side was 91.7%. There were significant differences among the three conditions. The results of the star cancellation test also showed the same tendency as the line bisection test. Conclusion The results showed that assessment of USN using an HMD system may reveal the disability of USN more than the common clinical tests. Moreover, it might be hypothesized that the three-dimensional USN test may be more related to the various types of damage and the occurrence of USN than the two-dimensional test alone.

  20. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  2. Validation of Data Association for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-01-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, providing strong and robust sensory systems even with simple devices, such as webcams in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, the JCCB. The HOHCT approach was developed to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCCB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
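
    Both validation schemes build on the same primitive: a chi-square (Mahalanobis) compatibility gate between a predicted measurement and an observation. A minimal sketch of that individual-compatibility test follows; the threshold value and function names are standard textbook choices, not taken from the article.

```python
import numpy as np

CHI2_95_2DOF = 5.991  # 95% chi-square gate for a 2-D innovation (image point)

def individually_compatible(innovation, S, gate=CHI2_95_2DOF):
    """Accept a feature-to-landmark pairing when the squared Mahalanobis
    distance d2 = nu^T S^-1 nu of the innovation nu (covariance S) falls
    inside the chi-square gate."""
    nu = np.asarray(innovation, float)
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 < gate, d2
```

Batch tests such as the JCCB or the HOHCT then search for the largest jointly consistent set of pairings that survive gates like this one.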

  3. Color speckle in laser displays

    Science.gov (United States)

    Kuroda, Kazuo

    2015-07-01

    At the beginning of this century, lighting technology shifted from discharge lamps, fluorescent lamps and electric bulbs to solid-state lighting. Current solid-state lighting is based on light-emitting diode (LED) technology, but laser lighting technology is developing rapidly, in applications such as laser cinema projectors, laser TVs, laser head-up displays, laser head-mounted displays, and laser headlamps for motor vehicles. One of the main issues of laser displays is the reduction of speckle noise [1]. For monochromatic laser light, speckle is a random interference pattern on the image plane (the retina, for a human observer). In laser displays, the RGB (red-green-blue) lasers form speckle patterns independently, which results in a random distribution of chromaticity, called color speckle [2].
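
    Speckle severity is conventionally quantified by the speckle contrast C = std(I)/mean(I), which is close to 1 for fully developed monochromatic speckle and falls as independent patterns are averaged; with RGB lasers, each channel carries its own independent pattern, so the per-pixel channel ratios fluctuate as well. A sketch using synthetic exponential intensity statistics (an assumption standing in for measured display data):

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast C = std(I) / mean(I)."""
    I = np.asarray(intensity, float)
    return float(I.std() / I.mean())

rng = np.random.default_rng(0)
# Fully developed speckle has exponentially distributed intensity: C ~ 1.
red, green, blue = (rng.exponential(1.0, 100_000) for _ in range(3))
# Averaging N independent patterns lowers contrast roughly as 1/sqrt(N),
# but the per-pixel *ratios* of the three channels still vary -- the
# chromaticity noise called color speckle.
luminance = (red + green + blue) / 3.0
```

With these statistics, each single channel shows C near 1 while the three-channel average shows C near 1/sqrt(3).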

  4. Monocular indoor localization techniques for smartphones

    Directory of Open Access Journals (Sweden)

    Hollósi Gergely

    2016-12-01

    Full Text Available In the last decade, a great deal of research has been devoted to the indoor visual localization of personal smartphones. Considering the available sensor capabilities, monocular odometry provides a promising solution, even reflecting the requirements of augmented reality applications. This paper aims to give an overview of state-of-the-art results regarding monocular visual localization. For this purpose, essential basics of computer vision are presented and the most promising solutions are reviewed.

  5. Monocular Video Guided Garment Simulation

    Institute of Scientific and Technical Information of China (English)

    Fa-Ming Li; Xiao-Wu Chen∗; Bin Zhou; Fei-Xiang Lu; Kan Guo; Qiang Fu

    2015-01-01

    We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It is a combination of a physically-based simulation and a boundary-based modification. Given a garment in the video worn on a mannequin, the simulation generates a garment initial shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into such a shape that matches the garment 2D boundary extracted from the video. According to the matching correspondences between the vertices on the shape and the points on the boundary, the modification is implemented by attracting the matched vertices and their neighboring vertices. For best-matching correspondences and efficient performance, three criteria are introduced to select the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes by the modification are also propagated from one frame to the next frame. As a result, the generated garment 3D shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.
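
    The modification step, attracting matched vertices toward their corresponding boundary points, can be illustrated with a bare-bones 2D version. A single nearest-neighbour criterion stands in for the paper's three selection criteria; the names and step size are ours.

```python
import numpy as np

def attract_to_boundary(verts2d, boundary, step=0.5):
    """One modification iteration: for each extracted boundary point, pull
    the nearest projected vertex a fraction `step` of the way toward it."""
    verts = np.asarray(verts2d, float).copy()
    for p in np.asarray(boundary, float):
        i = int(np.argmin(((verts - p) ** 2).sum(axis=1)))
        verts[i] += step * (p - verts[i])
    return verts
```

In the full method, neighbouring vertices of each match would also be displaced, and the changes propagated to the next frame to avoid inter-frame oscillation.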

  6. Measuring dwell time percentage from head-mounted eye-tracking data--comparison of a frame-by-frame and a fixation-by-fixation analysis.

    Science.gov (United States)

    Vansteenkiste, Pieter; Cardon, Greet; Philippaerts, Renaat; Lenoir, Matthieu

    2015-01-01

    Although software for analysing eye-tracking data has significantly improved in the past decades, the analysis of gaze behaviour recorded with head-mounted devices is still challenging and time-consuming. Therefore, new methods have to be tested to reduce the analysis workload while maintaining accuracy and reliability. In this article, dwell time percentages to six areas of interest (AOIs), from six participants cycling on four different roads, were analysed both frame-by-frame and in a 'fixation-by-fixation' manner. The fixation-based method is similar to the classic frame-by-frame method, but instead of frames, fixations are assigned to one of the AOIs. Although some considerable differences were found between the two methods, a Pearson correlation of 0.930 indicates good validity of the fixation-by-fixation method. For the analysis of gaze behaviour over an extended period of time, the fixation-based approach is a valuable and time-saving alternative to the classic frame-by-frame analysis.
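
    The two analyses differ only in the unit that gets assigned to an AOI: video frames of fixed duration versus fixations of variable duration. A toy sketch of both (the AOI labels and durations are invented for illustration):

```python
def dwell_frame_by_frame(frame_labels, aoi):
    """Frame-by-frame: share of video frames assigned to the AOI."""
    return frame_labels.count(aoi) / len(frame_labels)

def dwell_fixation_by_fixation(fixations, aoi):
    """Fixation-by-fixation: share of summed fixation duration on the AOI.
    `fixations` is a list of (aoi_label, duration_ms) tuples."""
    total = sum(d for _, d in fixations)
    return sum(d for a, d in fixations if a == aoi) / total
```

In the article, dwell percentages computed the two ways were then compared across participants and roads, yielding the reported Pearson correlation of 0.930.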

  7. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems, rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.
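
    The geometry behind depth from monocular zones is compact: for an occluding edge at distance d in front of a background at distance D, viewed with interocular separation a, the strip of background visible to one eye only has width a(D - d)/d on the background plane, so the zone width itself specifies the edge's depth. A worked sketch (distances in metres; the 0.065 m interocular value is a typical assumption, not a figure from the paper):

```python
def monocular_zone_width(a, d, D):
    """Width, on the background at distance D, of the strip seen by one
    eye only, given an occluding edge at distance d and interocular
    distance a (similar triangles through the edge)."""
    return a * (D - d) / d

def occluder_depth_from_zone(a, w, D):
    """Invert the relation: edge depth implied by a zone of width w."""
    return a * D / (a + w)
```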

  8. Infants' ability to respond to depth from the retinal size of human faces: comparing monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-11-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger 'closer' preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Infants’ ability to respond to depth from the retinal size of human faces: Comparing monocular and binocular preferential-looking

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K.; Yonas, Albert

    2014-01-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger ‘closer’ preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. PMID:25113916

  10. Cross-axis adaptation improves 3D vestibulo-ocular reflex alignment during chronic stimulation via a head-mounted multichannel vestibular prosthesis.

    Science.gov (United States)

    Dai, Chenkai; Fridman, Gene Y; Chiang, Bryce; Davidovics, Natan S; Melvin, Thuy-Anh; Cullen, Kathleen E; Della Santina, Charles C

    2011-05-01

    By sensing three-dimensional (3D) head rotation and electrically stimulating the three ampullary branches of a vestibular nerve to encode head angular velocity, a multichannel vestibular prosthesis (MVP) can restore vestibular sensation to individuals disabled by loss of vestibular hair cell function. However, current spread to afferent fibers innervating non-targeted canals and otolith end organs can distort the vestibular nerve activation pattern, causing misalignment between the perceived and actual axis of head rotation. We hypothesized that over time, central neural mechanisms can adapt to correct this misalignment. To test this, we rendered five chinchillas vestibular deficient via bilateral gentamicin treatment and unilaterally implanted them with a head-mounted MVP. Comparison of 3D angular vestibulo-ocular reflex (aVOR) responses during 2 Hz, 50°/s peak horizontal sinusoidal head rotations in darkness on the first, third, and seventh days of continual MVP use revealed that eye responses about the intended axis remained stable (at about 70% of the normal gain) while misalignment improved significantly by the end of 1 week of prosthetic stimulation. A comparable time course of improvement was also observed for head rotations about the other two semicircular canal axes and at every stimulus frequency examined (0.2-5 Hz). In addition, the extent of disconjugacy between the two eyes progressively improved during the same time window. These results indicate that the central nervous system rapidly adapts to multichannel prosthetic vestibular stimulation to markedly improve 3D aVOR alignment within the first week after activation. Similar adaptive improvements are likely to occur in other species, including humans.
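
    Misalignment in this context is the angle between the eye's rotation axis and the ideal compensatory axis (the head axis, inverted). A small sketch of that metric; the axis vectors are illustrative and this is not the study's analysis code.

```python
import numpy as np

def avor_misalignment_deg(eye_axis, head_axis):
    """Angle between the measured eye-velocity rotation axis and the ideal
    compensatory axis (-head_axis); 0 deg means a perfectly aligned aVOR."""
    e = np.asarray(eye_axis, float)
    c = -np.asarray(head_axis, float)
    cos = e @ c / (np.linalg.norm(e) * np.linalg.norm(c))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

An eye axis exactly opposite the head axis gives 0 deg; current spread to a neighbouring canal that tilts the eye axis 45 deg away would register as 45 deg of misalignment.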

  11. Monocular Blindness: Is It a Handicap?

    Science.gov (United States)

    Knoth, Sharon

    1995-01-01

    Students with monocular vision may be in need of special assistance and should be evaluated by a multidisciplinary team to determine whether the visual loss is affecting educational performance. This article discusses the student's eligibility for special services, difficulty in performing depth perception tasks, difficulties in specific classroom…

  12. Disparity biasing in depth from monocular occlusions.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2011-07-15

    Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  14. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed but, unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  15. One-Point Calibration Method for Head-Mounted Eye Tracking System

    Institute of Scientific and Technical Information of China (English)

    侯树卫; 李斌; 夏小宝

    2014-01-01

    The calibration method directly affects tracking accuracy and user experience, so it is a key link in gaze-tracking technology. Current calibration methods used by head-mounted tracking systems require multiple calibration points. In order to calibrate faster and more conveniently, we present a method which requires only one calibration point to extract sufficient calibration information and complete the calibration process. The method can be applied to a variety of mapping methods in current use, such as the DLT method, the polynomial method, and the neural network method. The calibration takes only 10 s, and the precision reaches 1°. Compared with multi-point calibration, it significantly improves efficiency with no noticeable difference in precision. In addition, we use a new neural network model, the ELM (extreme learning machine), to realise the neural network calibration; ELM's fast learning performance remarkably shortens the training time of the neural network.
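
    The appeal of the ELM is that "training" reduces to a single linear least-squares solve: the input weights are random and fixed, and only the output layer is fitted. A compact sketch of that idea follows, on toy regression data with invented layer sizes, not the paper's gaze-mapping setup.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer plus closed-form
    least-squares output weights (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                     # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = x0 + x1 from the random hidden features.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + X[:, 1]
W, b, beta = elm_fit(X, y)
```

Because the only fitted parameters come from one `lstsq` call, the training time is essentially that of a single matrix factorization, which is the property the article exploits.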

  16. Monocular and binocular depth discrimination thresholds.

    Science.gov (United States)

    Kaye, S B; Siddiqui, A; Ward, A; Noonan, C; Fisher, A C; Green, J R; Brown, M C; Wareing, P A; Watt, P

    1999-11-01

    Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured, stereoacuity conventionally referring to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold (BT) from the combined monocular and binocular threshold of depth discrimination (CT). Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required by means of a hand control to align two electronically controlled spheres at viewing distances of 1, 3, and 6 m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan[(1/tan alpha_C - 1/tan alpha_M)^(-1)], where alpha_C and alpha_M are the angles subtended at the nodal points by objects situated at the monocular threshold (alpha_M) and the combined monocular-binocular threshold (alpha_C) of discrimination. In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and the binocular thresholds of the Frisby and Titmus tests than the corresponding combined thresholds (p = 0.0019).
The VDS was found to be an easy to use real depth
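
    The binocular-threshold correction can be computed directly from the stated function. A worked sketch, with angles in radians and example values that are illustrative rather than the study's data:

```python
import math

def binocular_threshold(alpha_c, alpha_m):
    """BT = arctan[(1/tan(alpha_C) - 1/tan(alpha_M))^(-1)]: the binocular
    depth threshold recovered from the combined threshold alpha_C and the
    monocular threshold alpha_M (both in radians)."""
    return math.atan(1.0 / (1.0 / math.tan(alpha_c)
                            - 1.0 / math.tan(alpha_m)))

# When the monocular threshold is much coarser than the combined threshold,
# BT stays close to alpha_C; as alpha_C approaches alpha_M (no binocular
# benefit), the denominator vanishes and BT grows without bound.
```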

  17. Full parallax multifocus three-dimensional display using a slanted light source array

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Eun-Hee; Kim, Dong-Wook

    2011-11-01

    A new multifocus three-dimensional display, which gives a full-parallax monocular depth cue and omni-directional focus, is developed using a minimal number of parallax images. The key factor of this display system is a slanted, rather than horizontal, array of light-emitting-diode light sources. In this system, defocus effects are achieved experimentally, and the monocular focus effect is tested with four parallax images and even with two parallax images. The full-parallax multifocus three-dimensional display is readily applicable to monocular or binocular augmented-reality three-dimensional display when modified to a see-through type.

  18. Quantitative perceived depth from sequential monocular decamouflage.

    Science.gov (United States)

    Brooks, K R; Gillam, B J

    2006-03-01

    We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay ≥ 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.

  19. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  20. Outdoor autonomous navigation using monocular vision

    OpenAIRE

    Royer, Eric; Bom, Jonathan; Dhome, Michel; Thuilot, Benoît; Lhuillier, Maxime; Marmoiton, Francois

    2005-01-01

    International audience; In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided on a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three dimensional map of the trajectory and the environment is built. When this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are sho...

  1. Monocular alignment in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Wade, Nicholas J

    2002-04-01

    We examined (a) whether vertical lines at different physical horizontal positions in the same eye can appear to be aligned, and (b), if so, whether the difference between the horizontal positions of the aligned vertical lines can vary with the perceived depth between them. In two experiments, each of two vertical monocular lines was presented (in its respective rectangular area) in one field of a random-dot stereopair with binocular disparity. In Experiment 1, 15 observers were asked to align a line in an upper area with a line in a lower area. The results indicated that when the lines appeared aligned, their horizontal physical positions could differ and the direction of the difference coincided with the type of disparity of the rectangular areas; this is not consistent with the law of the visual direction of monocular stimuli. In Experiment 2, 11 observers were asked to report relative depth between the two lines and to align them. The results indicated that the difference of the horizontal position did not covary with their perceived relative depth, suggesting that the visual direction and perceived depth of the monocular line are mediated via different mechanisms.

  2. Visual SLAM for Handheld Monocular Endoscope.

    Science.gov (United States)

    Grasa, Óscar G; Bernal, Ernesto; Casado, Santiago; Gil, Ismael; Montiel, J M M

    2014-01-01

    Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.

  3. VLSI design of 3D display processing chip for binocular stereo displays

    Institute of Scientific and Technical Information of China (English)

    Ge Chenyang; Zheng Nanning

    2010-01-01

In order to develop the core chip supporting binocular stereo displays for head mounted displays (HMD) and glasses-TV, a very large scale integrated (VLSI) design scheme is proposed, using a pipeline architecture, for a 3D display processing chip (HMD100). Some key techniques, including stereo display processing and high-precision video scaling based on bicubic interpolation, and their hardware implementations, which improve the image quality, are presented. The proposed HMD100 chip is verified on a field-programmable gate array (FPGA). As an innovative, highly integrated SoC chip, HMD100 is designed as a mixed digital and analog circuit. It supports binocular stereo display, achieves a good scaling effect and high integration, and is hence applicable in virtual reality (VR), 3D games and other microdisplay domains.
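The record's mention of high-precision video scaling based on bicubic interpolation can be illustrated in software. Below is a minimal sketch of the standard Keys cubic-convolution kernel (a = -0.5) with a 4x4-neighbourhood sampler; the HMD100's hardware pipeline is not public, so the kernel choice and border clamping here are assumptions, not the chip's actual design.

```python
import math

def cubic_kernel(x, a=-0.5):
    """Keys cubic-convolution kernel with a = -0.5 (a common default)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_sample(img, y, x):
    """Resample a 2-D grid (list of rows) at fractional coords (y, x)
    using a 4x4 neighbourhood; border pixels are clamped."""
    y0, x0 = math.floor(y), math.floor(x)
    total = 0.0
    for j in range(y0 - 1, y0 + 3):
        for i in range(x0 - 1, x0 + 3):
            jj = min(max(j, 0), len(img) - 1)
            ii = min(max(i, 0), len(img[0]) - 1)
            total += img[jj][ii] * cubic_kernel(y - j) * cubic_kernel(x - i)
    return total
```

Because the kernel weights sum to one at any fractional offset, constant and linear image regions are reproduced exactly, which is why bicubic scaling preserves smooth gradients better than bilinear.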

  4. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
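The pipeline described above, a dictionary of canonical depth patterns, a Gaussian mixture likelihood tying image features to those patterns, and a simple Bayesian predictor, can be sketched in miniature. The toy below uses diagonal Gaussians and returns a posterior-weighted mean depth; the feature set and mixture structure of the actual model are far richer, so everything here is illustrative only.

```python
import math

def gauss_logpdf(x, mean, var):
    # Log density of a diagonal Gaussian, summed over feature dimensions.
    return sum(-0.5 * (math.log(2 * math.pi * v) + (a - m) ** 2 / v)
               for a, m, v in zip(x, mean, var))

def predict_depth(features, patterns):
    """patterns: list of (prior, feature_mean, feature_var, mean_depth)
    tuples, one per canonical depth pattern in the dictionary."""
    logps = [math.log(p) + gauss_logpdf(features, mu, var)
             for p, mu, var, _ in patterns]
    m = max(logps)
    weights = [math.exp(lp - m) for lp in logps]   # numerically stable softmax
    z = sum(weights)
    # Bayesian point estimate: posterior-weighted mean of the pattern depths.
    return sum(w / z * d for w, (_, _, _, d) in zip(weights, patterns))
```

Features close to one pattern's mean make its posterior dominate, so the predicted depth approaches that pattern's depth.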

  5. Human skeleton proportions from monocular data

    Institute of Scientific and Technical Information of China (English)

    PENG En; LI Ling

    2006-01-01

This paper introduces a novel method for estimating the skeleton proportions of a human figure from monocular data. The proposed system first automatically extracts the key frames and recovers the perspective camera model from the 2D data. The human skeleton proportions are then estimated from the key frames using the recovered camera model, without posture reconstruction. The proposed method is shown to be simple and fast, and to produce satisfactory results for the input data. The human model with estimated proportions can be used in future research involving human body modeling or human motion reconstruction.

  6. Perception of Spatial Features with Stereoscopic Displays.

    Science.gov (United States)

    1980-10-24

aniseikonia (differences in retinal image size in the two eyes) are of little significance because only monocular perception of the display is required for...perception as a result of such factors as aniseikonia, uncorrected refractive errors, or phorias results in reduced stereopsis. However, because

  7. Cognitive Cost of Using Augmented Reality Displays.

    Science.gov (United States)

    Baumeister, James; Ssin, Seung Youb; ElSayed, Neven A M; Dorrian, Jillian; Webb, David P; Walsh, James A; Simon, Timothy M; Irlitti, Andrew; Smith, Ross T; Kohler, Mark; Thomas, Bruce H

    2017-11-01

    This paper presents the results of two cognitive load studies comparing three augmented reality display technologies: spatial augmented reality, the optical see-through Microsoft HoloLens, and the video see-through Samsung Gear VR. In particular, the two experiments focused on isolating the cognitive load cost of receiving instructions for a button-pressing procedural task. The studies employed a self-assessment cognitive load methodology, as well as an additional dual-task cognitive load methodology. The results showed that spatial augmented reality led to increased performance and reduced cognitive load. Additionally, it was discovered that a limited field of view can introduce increased cognitive load requirements. The findings suggest that some of the inherent restrictions of head-mounted displays materialize as increased user cognitive load.

  8. Development of three types of multifocus 3D display

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong Wook

    2011-06-01

    Three types of multi-focus (MF) 3D display are developed and their capacity to provide a monocular depth cue is tested. Multi-focus refers to the ability to provide a monocular depth cue across various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye which can satisfy accommodation to displayed virtual objects within a defined depth range. The first MF 3D display is developed via a laser scanning method, the second uses an LED array as the light source, and the third uses a slanted LED array for a full-parallax monocular depth cue. The full-parallax MF 3D display system gives an omnidirectional focus effect. The proposed 3D display systems offer a possible solution to the eye fatigue problem that comes from the mismatch between the accommodation of each eye and the convergence of the two eyes. Monocular accommodation is tested, and proof that full-parallax accommodation is satisfied is given as a result of the proposed full-parallax MF 3D display system. We achieved a result showing that omnidirectional focus adjustment is possible via parallax images.

  9. New ultraportable display technology and applications

    Science.gov (United States)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-08-01

    MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and system-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color-generating gratings, already part of the CMOS circuit fabrication process, is effectively cost- and manufacturing-process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from maintenance and repair support, to night-vision systems, to portable projectors for mobile command and control centers.

  10. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Science.gov (United States)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies as an input device in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface, and verified the performance through key-typing experiments with a visual keyboard on a display.
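The final step of the algorithm, turning the displacement of the iris center relative to a calibration frame into a gaze angle under a fixed-eyeball-size assumption, can be sketched as follows. The 12 mm eyeball radius and the pixel-to-millimetre scale below are nominal assumptions for illustration, not values from the study.

```python
import math

def gaze_angle_deg(iris_xy, ref_xy, mm_per_pixel, eyeball_radius_mm=12.0):
    """Estimate the gaze rotation from the shift of the iris center between
    the current frame and a straight-ahead calibration frame.

    Assumes a spherical eyeball of fixed radius r rotating about its center,
    so the iris center moves by roughly r*sin(theta) in the image plane.
    """
    dx = (iris_xy[0] - ref_xy[0]) * mm_per_pixel
    dy = (iris_xy[1] - ref_xy[1]) * mm_per_pixel
    d = math.hypot(dx, dy)                 # Euclidean distance of iris centers
    d = min(d, eyeball_radius_mm)          # clamp: noise can exceed r
    return math.degrees(math.asin(d / eyeball_radius_mm))
```

A 6 mm image-plane shift under this model corresponds to asin(6/12), i.e. a 30 degree gaze rotation.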

  11. Reversible monocular cataract simulating amaurosis fugax.

    Science.gov (United States)

    Paylor, R R; Selhorst, J B; Weinberg, R S

    1985-07-01

    In a patient having brittle, juvenile-onset diabetes, transient monocular visual loss occurred repeatedly whenever there were wide fluctuations in serum glucose. Amaurosis fugax was suspected. The visual loss differed, however, in that it persisted over a period of hours to several days. Direct observation eventually revealed that the relatively sudden change in vision of one eye was associated with opacification of the lens and was not accompanied by an afferent pupillary defect. Presumably, a hyperosmotic gradient had developed with the accumulation of glucose and sorbitol within the lens. Water was drawn inward, altering the composition of the lens fibers and thereby lowering the refractive index, forming a reversible cataract. Hypoglycemia is also hypothesized to have played a role in the formation of a higher osmotic gradient. The unilaterality of the cataract is attributed to variation in the permeability of asymmetric posterior subcapsular cataracts.

  12. Integrated Display and Environmental Awareness System - System Architecture Definition

    Science.gov (United States)

    Doule, Ondrej; Miranda, David; Hochstadt, Jake

    2017-01-01

    The Integrated Display and Environmental Awareness System (IDEAS) is an interdisciplinary team project focusing on the development of a wearable computer and Head Mounted Display (HMD) based on Commercial-Off-The-Shelf (COTS) components for the specific application and needs of NASA technicians, engineers and astronauts. Wearable computers are on the verge of utilization trials in daily life as well as in industrial environments. The first civil and COTS wearable head mounted display systems were introduced just a few years ago, and they probed not only technology readiness, in terms of performance, endurance, miniaturization, operability and usefulness, but also the maturity of practice in a socio-technical context. Although the main technical hurdles such as mass and power were addressed as improvements on the technical side, usefulness, practicality and social acceptance were often the limiting factors across a broad variety of human operations. In other words, although the technology made a giant leap, its use and efficiency are still looking for the sweet spot. The first IDEAS project started in January 2015 and was concluded in January 2017. The project identified current COTS systems' capability at minimum cost and maximum applicability and brought about important strategic concepts that will serve further IDEAS-like system development.

  13. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  14. Amodal completion with background determines depth from monocular gap stereopsis.

    Science.gov (United States)

    Grove, Philip M; Ben Sachtler, W L; Gillam, Barbara J

    2006-10-01

    Grove, Gillam, and Ono [Grove, P. M., Gillam, B. J., & Ono, H. (2002). Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms. Vision Research, 42, 1859-1870] reported that perceived depth in monocular gap stereograms [Gillam, B. J., Blackburn, S., & Nakayama, K. (1999). Stereopsis based on monocular gaps: Metrical encoding of depth and slant without matching contours. Vision Research, 39, 493-502] was attenuated when the color/texture in the monocular gap did not match the background. It appears that continuation of the gap with the background constitutes an important component of the stimulus conditions that allow a monocular gap in an otherwise binocular surface to be responded to as a depth step. In this report we tested this view using the conventional monocular gap stimulus of two identical grey rectangles separated by a gap in one eye but abutting to form a solid grey rectangle in the other. We compared depth seen at the gap for this stimulus with stimuli that were identical except for two additional small black squares placed at the ends of the gap. If the squares were placed stereoscopically behind the rectangle/gap configuration (appearing on the background) they interfered with the perceived depth at the gap. However when they were placed in front of the configuration this attenuation disappeared. The gap and the background were able under these conditions to complete amodally.

  15. Localization of monocular stimuli in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Tam, Wa James; Asakura, Nobuhiko; Ohmi, Masao

    2005-09-01

    We examined the phenomenon in which two physically aligned monocular stimuli appear to be non-collinear when each of them is located in binocular regions that are at different depth planes. Using monocular bars embedded in binocular random-dot areas that are at different depths, we manipulated properties of the binocular areas and examined their effect on the perceived direction and depth of the monocular stimuli. Results showed that (1) the relative visual direction and perceived depth of the monocular bars depended on the binocular disparity and the dot density of the binocular areas, and (2) the visual direction, but not the depth, depended on the width of the binocular regions. These results are consistent with the hypothesis that monocular stimuli are treated by the visual system as binocular stimuli that have acquired the properties of their binocular surrounds. Moreover, partial correlation analysis suggests that the visual system utilizes both the disparity information of the binocular areas and the perceived depth of the monocular bars in determining the relative visual direction of the bars.

  16. Military market for flat panel displays

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1997-07-01

    This paper addresses the number, function and size of primary military displays and establishes a basis for determining the opportunities for technology insertion in the immediate future and into the next millennium. The military displays market is specified by such parameters as active area and footprint size, and by other characteristics such as luminance, gray scale, resolution, color capability and night vision imaging system capability. A select grouping of funded future acquisitions, planned and predicted cockpit kits, and form-fit-function upgrades are taken into account. It is the intent of this paper to provide an overview of the DoD niche market, allowing both government and industry a timely reference to ensure meeting DoD requirements for flat-panel displays on schedule and in a cost-effective manner. The aggregate DoD market for direct-view displays is presently estimated to be in excess of 157,000 units. Helmet- and head-mounted displays will add substantially to this total. The vanishing-vendor syndrome for older display technologies is becoming a growing, pervasive problem throughout DoD, which consequently must leverage the more modern display technologies being developed for civil-commercial markets.

  17. Mathematical Basis of Knowledge Discovery and Autonomous Intelligent Architectures - Eye-Tracking and Head-Mounted Display/Tracking Computer System for the Remote Control of Robots and Manipulators

    Science.gov (United States)

    2005-12-14

    etc; - Simulators of real-time control processes (nuclear station, aviation, and others); - Remote control of camera-head (Web-cameras, security etc...capabilities for man-operator using HTS & HTS+. Compared with the traditional HTS for aviation purposes, for robot telecontrol it is essential to...a rate should not be worse than 1024x1024 pixels. The experiments showed that estimating thresholds of stereopsis with high accuracy requires

  18. A case study of new assessment and training of unilateral spatial neglect in stroke patients: effect of visual image transformation and visual stimulation by using a head mounted display system (HMD)

    Directory of Open Access Journals (Sweden)

    Sugihara Shunichi

    2010-05-01

    Background: Unilateral spatial neglect (USN) is most damaging to older stroke patients who also have lower performance in activities of daily living, and to elderly patients who are still working. The purpose of this study was to understand the pathology of USN more accurately by using a new HMD system. Methods: Two stroke patients (Subjects A and B) participated in this study after giving informed consent; both had left USN as determined by clinical tests. Assessments of USN were performed using a common clinical test (the line cancellation test) and six special tests using the HMD system under an object-centered coordinates (OC) condition and an egocentric coordinates (EC) condition. In the OC condition, the CCD camera viewed the test sheet only; in the EC condition, the CCD camera always followed the subject's movement. Moreover, the study examined the effect of a reduced-image condition of the real image and of arrow cues. Results: For Patient A, who performed the common test and the special tests in the OC and EC conditions, the percentage of correct answers on the line cancellation test under the common condition was 100 percent on both the right and left sides of the test sheet. In the OC condition, however, the percentage of correct answers was 44 percent on the left side of the test sheet and 94 percent on the right side; in the EC condition, it was 61 percent on the left side and 67 percent on the right side. For Patient B, under the reduced-image and arrow conditions of the HMD system, the line cancellation scores increased relative to the score on the common test. Conclusions: The results showed that assessment of USN using an HMD system may reveal a left neglect area which cannot be easily observed in the clinical evaluation of USN. An HMD may be able to produce an artificially versatile environment compared to common clinical evaluation and treatment.

  19. Improved layer-based method for rapid hologram generation and real-time interactive holographic display applications.

    Science.gov (United States)

    Chen, J-S; Chu, D P

    2015-07-13

    The layer-based method has been proposed as an efficient approach to calculating holograms for holographic image display. This paper further improves its calculation speed and the quality of its depth cues by introducing three different techniques: an improved coding scheme, a multilayer depth-fused 3D method, and a fraction method. As a result, the total computation time is reduced by more than 4 times, and holographic images with an accommodation cue are calculated in real time, allowing interaction with the displayed image in a proof-of-concept setting of head-mounted holographic displays.
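One of the three techniques named above, the multilayer depth-fused 3D method, rests on a simple luminance-division rule: a point lying between two display layers is rendered on both, with its luminance split so that the apparent depth falls in between. A hedged sketch of that rule (the paper's exact weighting may differ):

```python
def fuse_luminance(depth, layer_depths, luminance=1.0):
    """Split `luminance` between the two layers bracketing `depth`.
    Depth-fused 3-D rule: apparent depth ~ the luminance-weighted
    position of the two contributing layers. Simplified sketch only."""
    layer_depths = sorted(layer_depths)
    if depth <= layer_depths[0]:
        return {layer_depths[0]: luminance}
    if depth >= layer_depths[-1]:
        return {layer_depths[-1]: luminance}
    for near, far in zip(layer_depths, layer_depths[1:]):
        if near <= depth <= far:
            w_far = (depth - near) / (far - near)   # linear weighting
            return {near: luminance * (1 - w_far), far: luminance * w_far}
```

Under linear weighting, the luminance-weighted layer position reproduces the requested depth exactly, which is the property the fusion relies on.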

  20. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
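The linear-combination model fitted to these measurements can be sketched by treating each chromatic modulation as a phasor, with the surround contributing a weighted, phase-shifted copy to the center's modulation. The weight value used below is hypothetical; in the article the weight is a free parameter fitted per frequency and viewing condition.

```python
import cmath
import math

def perceived_modulation(m_center, m_surround, phase_deg, weight):
    """Predicted modulation depth of the center under a linear
    center-surround combination: the surround adds a weighted,
    phase-shifted phasor to the center's modulation phasor."""
    c = complex(m_center, 0.0)
    s = cmath.rect(m_surround * weight, math.radians(phase_deg))
    return abs(c + s)
```

With a suppressive (negative) weight, an in-phase surround reduces the center's perceived modulation depth and an antiphase surround enhances it, matching the phase dependence reported in the abstract.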

  1. Hazard detection with a monocular bioptic telescope.

    Science.gov (United States)

    Doherty, Amy L; Peli, Eli; Luo, Gang

    2015-09-01

    The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area invisible to the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos that fall in the ring scotoma area. Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that, when reading them through the telescope, the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without the bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma), to further determine the cause-and-effect relationship between hazard detection and the fellow eye. There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, 50% while reading with the non-occluding bioptic, and 34% while reading with the partially occluding bioptic. For normally sighted subjects, detection of vertical hazards (53%) was significantly higher than detection of lateral hazards (38%) with the partially occluding bioptic. Detection of driving hazards is impaired by the addition of a secondary reading-like task. Detection is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma. © 2015 The Authors. Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  2. Monocular Road Detection Using Structured Random Forest

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2016-05-01

    Road detection is a key task for autonomous land vehicles. Monocular vision-based road detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, the pixel-wise classifiers are faced with the ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce the ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction model in road detection is the Markov random field or conditional random field. However, the random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of the classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to get more consistent results. Besides this benefit, by predicting a batch of pixels in a single classification, the structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results tested on the KITTI-ROAD dataset and data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest both in accuracy and efficiency.
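The key idea in the abstract, mapping the structured label space to a discrete label space so that split functions can be trained as in a classical random forest, and then emitting a whole label patch per prediction, can be shown with a toy stand-in. The nearest-centroid "classifier" below is an illustrative substitute for real split-node training; only the label-space mapping mirrors the method.

```python
from collections import defaultdict

def train(samples):
    """samples: list of (feature_vector, label_patch) pairs, where each
    label_patch is a small tuple of per-pixel road/non-road labels.
    Maps each distinct structured label to a discrete class id, then
    fits a trivial nearest-centroid model over those ids."""
    patch_to_id, id_to_patch, by_id = {}, [], defaultdict(list)
    for feats, patch in samples:
        if patch not in patch_to_id:
            patch_to_id[patch] = len(id_to_patch)
            id_to_patch.append(patch)
        by_id[patch_to_id[patch]].append(feats)
    centroids = {cid: [sum(col) / len(col) for col in zip(*fs)]
                 for cid, fs in by_id.items()}
    return centroids, id_to_patch

def predict(model, feats):
    """Pick the nearest class id, then emit its whole label patch,
    so one classification labels a batch of pixels at once."""
    centroids, id_to_patch = model
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    cid = min(centroids, key=lambda c: dist(feats, centroids[c]))
    return id_to_patch[cid]
```

Predicting patches rather than single pixels is what gives the structured forest both its spatial consistency and its speed advantage over pixel-wise classification.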

  3. Ernst Mach and the episode of the monocular depth sensations.

    Science.gov (United States)

    Banks, E C

    2001-01-01

    Although Ernst Mach is widely recognized in psychology for his discovery of the effects of lateral inhibition in the retina ("Mach Bands"), his contributions to the theory of depth perception are not as well known. Mach proposed that steady luminance gradients triggered sensations of depth. He also expanded on Ewald Hering's hypothesis of "monocular depth sensations," arguing that they were subject to the same principle of lateral inhibition as light sensations were. Even after Hermann von Helmholtz's attack on Hering in 1866, Mach continued to develop theories involving the monocular depth sensations, proposing an explanation of perspective drawings in which the mutually inhibiting depth sensations scaled to a mean depth. Mach also contemplated a theory of stereopsis in which monocular depth perception played the primary role. Copyright 2001 John Wiley & Sons, Inc.

  4. A Comparison of Monocular and Binocular Depth Perception in 5- and 7-Month-Old Infants.

    Science.gov (United States)

    Granrud, Carl E.; And Others

    1984-01-01

    Compares monocular depth perception with binocular depth perception in five- to seven-month-old infants. Reaching preferences (dependent measure) observed in the monocular condition indicated sensitivity to monocular depth information. Binocular viewing resulted in a far more consistent tendency to reach for the nearer object. (Author)

  5. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    Science.gov (United States)

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today, we human beings are faced with a high-quality virtual world of a completely new nature. For example, we have digital displays of high enough resolution that we cannot distinguish them from the real world. However, little is known about how such high-quality representation contributes to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representation? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. As a result, we found that a higher-resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding goes against the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal complex (MT+) plays a significant role in monocular depth perception. These results might provide us not only with new insight into the neural mechanism of depth perception but also with a view of the future progress of our neural system accompanied by state-of-the-art technologies.

  6. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Measuring young infants' sensitivity to height-in-the-picture-plane by contrasting monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-01-01

    To examine young infants' sensitivity to a pictorial depth cue, we compared monocular and binocular preferential looking to objects whose depth was specified by height-in-the-picture-plane. For adults, this cue generates the perception that a lower object is closer than a higher object. This study showed that 4- and 5-month-old infants fixated the lower, apparently closer, figure more often under monocular than binocular presentation, providing evidence of their sensitivity to the pictorial depth cue. Because the displays were identical in the two conditions except for binocular information for depth, the difference in looking behavior indicated sensitivity to depth information, excluding the possibility that infants responded to 2D characteristics. This study also confirmed the usefulness of the method, preferential looking with a monocular-binocular comparison, for examining sensitivity to a pictorial depth cue in young infants, who are too immature to reach reliably for the closer of two objects. © 2013 Wiley Periodicals, Inc.

  8. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) subjects viewed a 3-D video and its 2-D version, both binocularly and monocularly; and (II) in binocular and monocular viewing conditions, subjects were presented with stationary 2-D stimuli that either contained or did not contain monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were present only for the 3-D video and not the 2-D version, while in the course of monocular viewing of the 2-D video, some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II), in general, seem to support this hypothesis. © The Author(s) 2015.

  9. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations.

    Science.gov (United States)

    Binda, Paola; Lunghi, Claudia

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  10. Monocular SLAM for Autonomous Robots with Enhanced Features Initialization

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2014-04-01

    Full Text Available This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced, taking advantage of data from a secondary monocular sensor worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates are used to solve a known limitation of DI-D monocular SLAM, namely the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation with real data shows improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.

  11. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced, taking advantage of data from a secondary monocular sensor worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates are used to solve a known limitation of DI-D monocular SLAM, namely the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation with real data shows improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.
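The pseudo-calibrated stereo rig described in this record recovers feature depth by triangulating a match seen from the robot's camera and the human-worn camera. A minimal sketch of midpoint triangulation from two known camera centers (illustrative geometry only, not the authors' implementation; all numbers are invented):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: find the point closest to two rays
    x = c_i + t_i * d_i (camera centers c_i, direction vectors d_i
    toward the matched feature)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|^2.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return (p1 + p2) / 2  # midpoint of closest approach

# Example: a feature at (1, 1, 5) seen from two camera centers
# separated by a 0.5-unit baseline.
target = np.array([1.0, 1.0, 5.0])
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([0.5, 0.0, 0.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

When the two rays do not intersect exactly, the midpoint of their closest approach is a common closed-form choice; the estimate degrades as the rays approach parallel, which is why the method above is only applied when the two fields of view coincide.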

  12. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, “hippus,” spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  13. Localization of Head-Mounted Vibrotactile Transducers

    Science.gov (United States)

    2013-02-01

    Results suggested that the hair follicles themselves aid in tactor detection and identification as their hair strands are subjected to the stimuli, although this result is anecdotal. In a target-acquisition task on a computer screen, the use of tactile feedback produced a quicker motor response than other feedback systems.

  14. Monocular and binocular edges enhance the perception of stereoscopic slant.

    Science.gov (United States)

    Wardle, Susan G; Palmisano, Stephen; Gillam, Barbara J

    2014-07-01

    Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  16. fMRI investigation of monocular pattern rivalry.

    Science.gov (United States)

    Mendola, Janine D; Buckthought, Athena

    2013-01-01

    In monocular pattern rivalry, a composite image is shown to both eyes. The observer experiences perceptual alternations in which the two stimulus components alternate in clarity or salience. We used fMRI at 3T to image brain activity while participants perceived monocular rivalry passively or indicated their percepts with a task. The stimulus patterns were left/right oblique gratings, face/house composites, or a nonrivalrous control stimulus that did not support the perception of transparency or image segmentation. All stimuli were matched for luminance, contrast, and color. Compared with the control stimulus, the cortical activation for passive viewing of grating rivalry included dorsal and ventral extrastriate cortex, superior and inferior parietal regions, and multiple sites in frontal cortex. When the BOLD signal for the object rivalry task was compared with the grating rivalry task, a similar whole-brain network was engaged, but with significantly greater activity in extrastriate regions, including V3, V3A, the fusiform face area (FFA), and the parahippocampal place area (PPA). In addition, for the object rivalry task, FFA activity was significantly greater during face-dominant periods whereas PPA activity was greater during house-dominant periods. Our results demonstrate that slight stimulus changes that trigger monocular rivalry recruit a large whole-brain network, as previously identified for other forms of bistability. Moreover, the results indicate that rivalry for complex object stimuli preferentially engages extrastriate cortex. We also establish that even under natural viewing conditions, endogenous attentional fluctuations in monocular pattern rivalry differentially drive object-category-specific cortex, similar to binocular rivalry, but without complete suppression of the nondominant image.

  17. The effect of induced monocular blur on measures of stereoacuity.

    Science.gov (United States)

    Odell, Naomi V; Hatt, Sarah R; Leske, David A; Adams, Wendy E; Holmes, Jonathan M

    2009-04-01

    To determine the effect of induced monocular blur on stereoacuity measured with real depth and random dot tests. Monocular visual acuity deficits (range, 20/15 to 20/1600) were induced with 7 different Bangerter filters; stereoacuity was measured with real depth tests (Frisby, FD2) and with the Preschool Randot (PSR) and Distance Randot (DR) random dot tests. Stereoacuity results were grouped from "fine" (60 arcsec or better) to "coarse/nil" (200 arcsec to nil). Across visual acuity deficits, stereoacuity was more severely degraded with random dot (PSR, DR) than with real depth (Frisby, FD2) tests. Degradation to worse-than-fine stereoacuity consistently occurred at 0.7 logMAR (20/100) or worse for Frisby, 0.1 logMAR (20/25) or worse for PSR, and 0.1 logMAR (20/25) or worse for FD2. There was no meaningful threshold for the DR because worse-than-fine stereoacuity was associated with -0.1 logMAR (20/15). Coarse/nil stereoacuity was consistently associated with 1.2 logMAR (20/320) or worse for Frisby, 0.8 logMAR (20/125) or worse for PSR, 1.1 logMAR (20/250) or worse for FD2, and 0.5 logMAR (20/63) or worse for DR. Stereoacuity thresholds are more easily degraded by reduced monocular visual acuity with random dot tests (PSR and DR) than with real depth tests (Frisby and FD2). We have defined levels of monocular visual acuity degradation associated with fine and nil stereoacuity. These findings have important implications for testing stereoacuity in clinical populations.
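The thresholds above mix Snellen fractions and logMAR values; the two notations are related by logMAR = log10(MAR), where the minimum angle of resolution (MAR) for a 20/X Snellen acuity is X/20 (a 20/20 letter stroke subtends 1 arcmin). A small conversion helper reproducing the pairings quoted in the abstract:

```python
import math

def snellen_to_logmar(denominator, numerator=20):
    """Convert a Snellen fraction (numerator/denominator) to logMAR:
    the base-10 log of the minimum angle of resolution in arcmin."""
    return math.log10(denominator / numerator)

# Pairings quoted in the abstract (clinically rounded to 0.1 logMAR):
assert round(snellen_to_logmar(100), 1) == 0.7    # 20/100
assert round(snellen_to_logmar(25), 1) == 0.1     # 20/25
assert round(snellen_to_logmar(15), 1) == -0.1    # 20/15
assert round(snellen_to_logmar(320), 1) == 1.2    # 20/320
assert round(snellen_to_logmar(63), 1) == 0.5     # 20/63
```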

  18. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  19. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.
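A scanner of this kind typically recovers each surface point by intersecting the camera ray through a laser-lit pixel with the calibrated laser plane. A minimal sketch of that ray-plane intersection (hypothetical calibration values; the paper's actual geometric construction method is not reproduced here):

```python
import numpy as np

def ray_plane_intersection(ray_dir, plane_normal, plane_point,
                           ray_origin=np.zeros(3)):
    """Intersect the camera ray x = o + t*d through a lit pixel with
    the calibrated laser plane n . (x - p0) = 0."""
    denom = plane_normal @ ray_dir
    if abs(denom) < 1e-9:
        raise ValueError("ray is parallel to the laser plane")
    t = (plane_normal @ (plane_point - ray_origin)) / denom
    return ray_origin + t * ray_dir

# Hypothetical calibration: laser plane z = 1 in camera coordinates,
# and a pixel back-projected to the ray direction (0.1, 0.2, 1.0).
point = ray_plane_intersection(
    ray_dir=np.array([0.1, 0.2, 1.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
    plane_point=np.array([0.0, 0.0, 1.0]))
```

Sweeping the handheld laser line over the object and repeating this intersection per lit pixel yields the point cloud; the background calibration board serves to estimate the laser plane parameters per frame.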

  20. Monocular nasal hemianopia from atypical sphenoid wing meningioma.

    Science.gov (United States)

    Stacy, Rebecca C; Jakobiec, Frederick A; Lessell, Simmons; Cestari, Dean M

    2010-06-01

    Neurogenic monocular nasal field defects respecting the vertical midline are quite uncommon. We report a case of a unilateral nasal hemianopia that was caused by compression of the left optic nerve by a sphenoid wing meningioma. Histological examination revealed that the pathology of the meningioma was consistent with that of an atypical meningioma, which carries a guarded prognosis with increased chance of recurrence. The tumor was debulked surgically, and the patient's visual field defect improved.

  1. Indoor monocular mobile robot navigation based on color landmarks

    Institute of Scientific and Technical Information of China (English)

    LUO Yuan; ZHANG Bai-sheng; ZHANG Yi; LI Ling

    2009-01-01

    A robot landmark navigation system based on a monocular camera was researched theoretically and experimentally. First, the landmark setting and its data structure in the program are given; then landmark coordinate acquisition by the robot and global localization of the robot are described; finally, experiments based on a Pioneer III mobile robot show that the system works well in different topographic situations without losing signposts.

  2. Altered anterior visual system development following early monocular enucleation

    Directory of Open Access Journals (Sweden)

    Krista R. Kelly

    2014-01-01

    Conclusions: The novel finding of an asymmetry in morphology of the anterior visual system following long-term survival from early monocular enucleation indicates altered postnatal visual development. Possible mechanisms behind this altered development include recruitment of deafferented cells by crossing nasal fibres and/or geniculate cell retention via feedback from primary visual cortex. These data highlight the importance of balanced binocular input during postnatal maturation for typical anterior visual system morphology.

  3. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features by means of a stochastic triangulation technique. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.
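Batch validation techniques such as the HOHCT build on statistical compatibility: a feature-measurement pairing survives only if the Mahalanobis distance of its innovation falls inside a chi-square gate. A generic sketch of that individual compatibility check (the standard formulation, not the authors' exact algorithm; the covariance values are invented):

```python
import numpy as np

CHI2_95_2DOF = 5.991  # 95% chi-square gate for a 2-D innovation

def is_compatible(innovation, S):
    """Individual compatibility: accept a feature-measurement pairing
    when the squared Mahalanobis distance of the innovation under
    innovation covariance S falls inside the chi-square gate."""
    d2 = innovation @ np.linalg.solve(S, innovation)
    return d2 < CHI2_95_2DOF

S = np.diag([4.0, 4.0])                          # covariance (px^2)
good = is_compatible(np.array([1.0, 1.0]), S)    # d^2 = 0.5, accepted
bad = is_compatible(np.array([10.0, 0.0]), S)    # d^2 = 25.0, rejected
```

A batch test like the HOHCT then searches over joint hypotheses, keeping the highest-order set of pairings that remains jointly compatible rather than accepting each pairing in isolation.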

  5. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    Science.gov (United States)

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average-case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
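The scale correction described above reduces to a ratio: if the reconstruction places the ground plane h_sfm (arbitrary) units below the camera while the true mounting height is h_true meters, every translation is rescaled by h_true/h_sfm. A schematic illustration (variable names and numbers are invented):

```python
def corrected_translation(t_sfm, h_true, h_sfm):
    """Rescale a monocular-SFM translation estimate so that the
    reconstructed camera height over the ground plane matches the
    known mounting height, recovering metric scale."""
    scale = h_true / h_sfm
    return [scale * c for c in t_sfm]

# Hypothetical numbers: camera mounted 1.5 m above the road, but the
# reconstruction places the ground plane 0.5 (arbitrary) units below
# the camera, so all translations are scaled by a factor of 3.
t_metric = corrected_translation([0.2, 0.0, 1.0], h_true=1.5, h_sfm=0.5)
```

The hard part, and the paper's contribution, is estimating h_sfm reliably per frame; the cue-combination mechanism weights each ground-plane cue by its inferred confidence before this one-line rescaling is applied.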

  6. Ocular vergence measurement in projected and collimated simulator displays.

    Science.gov (United States)

    Morahan, P; Meehan, J W; Patterson, J; Hughes, P K

    1998-09-01

    The purpose of this study was to investigate electrooculography (EOG) as a measurement of ocular vergence in both collimated and projected simulator environments. The task required participants to shift their gaze between a central fixation point and a target appearing at one of three eccentricities. EOG was effective in recording ocular vergence. The EOG results were similar between collimated and projected displays, except for differences in vergence changes during lateral movement of the eyes, and ocular excursions downward elicited a greater EOG response than the reverse upward movement. The computer-based technique of recording vergence was found to produce measurable traces from a majority of participants. The technique has potential for further development as a tool for measuring ocular vergence in virtual environments where methods that require the wearing of head-mounted apparatus to track ocular structures (e.g., the pupil), which cannot be worn at the same time as a flight or flight-simulator helmet, are unsuitable.

  7. Use of display technologies for augmented reality enhancement

    Science.gov (United States)

    Harding, Kevin

    2016-06-01

    Augmented reality (AR) is seen as an important tool for the future of user interfaces as well as training applications. An important application area for AR is expected to be the digitization of training and worker instructions used in the Brilliant Factory environment. The transition of work-instruction methods from printed pages in a book, or taped to a machine, to virtual simulations is a long step with many challenges along the way. A variety of augmented reality tools are being explored today for industrial applications, ranging from simple programmable projections in the workspace to 3D displays and head-mounted gear. This paper reviews where some of these tools are today and some of the pros and cons being considered for the future worker environment.

  8. Helmet-mounted displays based on retinal projection display

    Institute of Scientific and Technical Information of China (English)

    杨敏娜; 郭忠达; 阳志强

    2012-01-01

    Conventional see-through head-mounted displays can present a virtual image, but the virtual image cannot stay sharp when the eye changes its focal distance. A new see-through head-mounted display technology is studied here, in which the wearer can see the external scene and, at the same time, the virtual image shown on a micro-display chip, with the virtual image remaining sharp independently of the eye's accommodation. The principle of the display technique is introduced. The overall optical system was designed with the optical design software Zemax and, after optimization, reaches the diffraction limit, with the MTF of the filtering projection system at 0.7 at 60 lp/mm. The mechanical structure of the helmet-mounted display was designed in AutoCAD. Imaging experiments show that both the external scene and the virtual image can be seen: when the eye focuses on the external scene, both the scene and the virtual image remain sharp; when the eye defocuses from the external scene, the scene becomes blurred while the virtual image stays sharp.

  9. A method for generating enhanced vision displays using OpenGL video texture

    Science.gov (United States)

    Bernier, Kenneth L.

    2010-04-01

    Degraded visual conditions can bewilder the curious and destroy the unprepared. While navigation instruments are trustworthy companions, true visual reference remains king of the hill. Poor visibility may be overcome via imaging sensors such as low-light-level charge-coupled devices, infrared, and millimeter wave radar. Enhanced Vision systems combine this imagery into a comprehensive situation awareness display, presented to the pilot as reference imagery on a cockpit display, or as world-conformal imagery on head-up or head-mounted displays. This paper demonstrates that Enhanced Vision imaging can be achieved at video rates using a typical CPU/GPU architecture, standard video capture hardware, dynamic non-linear ray tracing algorithms, efficient image transfer methods, and simple OpenGL rendering techniques.
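The fusion step of an Enhanced Vision system can be approximated, at its very simplest, as a per-pixel weighted blend of co-registered sensor images before the result is uploaded as a display texture. A toy sketch (uniform per-sensor weights and invented values; real systems adapt weights to sensor confidence and do the heavy lifting on the GPU via the OpenGL pipeline the paper describes):

```python
import numpy as np

def blend_sensors(images, weights):
    """Per-pixel weighted average of co-registered sensor images
    (e.g., low-light CCD, infrared, millimeter-wave radar), each
    normalized to [0, 1]; the output feeds the display texture."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()              # normalize to sum to 1
    stack = np.stack(images).astype(float)
    return np.tensordot(weights, stack, axes=1)

ir = np.full((2, 2), 0.8)     # bright infrared return (toy 2x2 image)
lll = np.full((2, 2), 0.2)    # dim low-light-level image
fused = blend_sensors([ir, lll], weights=[0.75, 0.25])
# each pixel: 0.75*0.8 + 0.25*0.2 = 0.65
```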

  10. A universal and smart helmet-mounted display of large FOV

    Science.gov (United States)

    Zhang, Nan; Weng, Dongdong; Wang, Yongtian; Li, Xuan; Liu, Youhai

    2011-11-01

    An HMD (head-mounted display) is an important virtual reality device and has played a vital role in VR application systems. Traditional HMDs are difficult to apply in daily life owing to their disadvantages in price and performance; by contrast, the new universal and smart helmet-mounted display of large FOV presented here takes excellent performance and widespread affordability as its starting point. By adopting a simplified visual system and a transflective system that combines transmission-type and reflection-type display paths through a transflective glass based on the Huygens-Fresnel principle, we have designed an HMD with a wide field of view that is easy to promote and popularize. Its resolution is 800*600, its field of view is 36.87° (vertical) * 47.92° (horizontal), and its weight is only 1080 g, comparable to advanced systems worldwide.

  11. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep decreased.

  13. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we review the two main classes of techniques that have proved most effective so far: the template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and the non-rigid structure-from-motion techniques that exploit points tracked across the sequence without requiring such a reference.

  14. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain images of the gears on the conveyor belt. The gears' features, including number of holes, number of teeth and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' position and produce trigger signals for the pneumatic cylinders. Automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
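Extracting a feature such as "number of holes" can be done with connected-component labeling on the binarized gear image: background regions that do not touch the image border are holes. A dependency-free sketch on a synthetic silhouette (illustrative only; the paper's actual image pipeline is not reproduced):

```python
from collections import deque

def count_holes(grid):
    """Count background (0) regions fully enclosed by the part (1)
    in a binary image, using BFS connected-component labeling."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    holes = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == 0 and not seen[sy][sx]:
                queue = deque([(sy, sx)])
                seen[sy][sx] = True
                touches_border = False
                while queue:                    # flood-fill the region
                    y, x = queue.popleft()
                    if y in (0, h - 1) or x in (0, w - 1):
                        touches_border = True   # open to the outside
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                           grid[ny][nx] == 0 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if not touches_border:
                    holes += 1                  # enclosed region = hole
    return holes

# A toy 5x5 "gear" silhouette with one enclosed hole in the center:
part = [[0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [1, 1, 0, 1, 1],
        [1, 1, 1, 1, 1],
        [0, 1, 1, 1, 0]]
# count_holes(part) == 1
```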

  15. Monocular occlusions determine the perceived shape and depth of occluding surfaces.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2010-06-01

    Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.

  16. Monocular camera and IMU integration for indoor position estimation.

    Science.gov (United States)

    Zhang, Yinlong; Tan, Jindong; Zeng, Ziming; Liang, Wei; Xia, Ye

    2014-01-01

    This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we fix the monocular camera facing downward to the floor and collect successive frames in which textures are orderly distributed and feature points are robustly detected, rather than using a forward-oriented camera to sample unknown and disordered scenes at a predetermined frame rate and auto-focused metric scale. Meanwhile, the camera adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, distinctive image-feature matching approaches are employed for visual localization: optical flow for the fast-motion mode, and the Canny edge detector, Harris corner detector, and SIFT descriptor for the slow-motion mode. For superfast motion and abrupt rotation, where images from the camera are blurred and unusable, an extended Kalman filter is exploited to estimate the IMU outputs and derive the corresponding trajectory. Experimental results validate that the proposed method is effective and accurate in indoor positioning. Since the system is computationally efficient and compact, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.
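
The IMU-driven mode switching described above can be sketched as below. The thresholds and mode names are invented for illustration; the paper does not publish its exact switching rules.

```python
def select_mode(accel_norm, gyro_norm):
    """Pick a localization mode from IMU magnitudes (m/s^2, rad/s); thresholds are illustrative."""
    if accel_norm > 8.0 or gyro_norm > 3.0:
        return "ekf_only"       # superfast motion / abrupt rotation: images blur, trust the IMU filter
    if accel_norm > 2.0:
        return "optical_flow"   # fast motion: cheap frame-to-frame tracking
    return "feature_matching"   # slow motion: Canny edges + Harris corners + SIFT descriptors

def adaptive_frame_rate(base_hz, accel_norm, max_hz=60.0):
    """Scale the camera frame rate with measured motion, capped at the sensor limit."""
    return min(max_hz, base_hz * (1.0 + accel_norm))
```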

  17. Surface formation and depth in monocular scene perception.

    Science.gov (United States)

    Albert, M K

    1999-01-01

    The visual perception of monocular stimuli perceived as 3-D objects has received considerable attention from researchers in human and machine vision. However, most previous research has focused on how individual 3-D objects are perceived. Here this is extended to a study of how the structure of 3-D scenes containing multiple, possibly disconnected objects and features is perceived. Da Vinci stereopsis, stereo capture, and other surface formation and interpolation phenomena in stereopsis and structure-from-motion suggest that small features having ambiguous depth may be assigned depth by interpolation with features having unambiguous depth. I investigated whether vision may use similar mechanisms to assign relative depth to multiple objects and features in sparse monocular images, such as line drawings, especially when other depth cues are absent. I propose that vision tends to organize disconnected objects and features into common surfaces to construct 3-D-scene interpretations. Interpolations that are too weak to generate a visible surface percept may still be strong enough to assign relative depth to objects within a scene. When there exists more than one possible surface interpolation in a scene, the visual system's preference for one interpolation over another seems to be influenced by a number of factors, including: (i) proximity, (ii) smoothness, (iii) a preference for roughly frontoparallel surfaces and 'ground' surfaces, (iv) attention and fixation, and (v) higher-level factors. I present a variety of demonstrations and an experiment to support this surface-formation hypothesis.

  18. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide the initial depth map and scale-correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera's visual coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlated semi-dense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with a depth-map estimate calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. Finally, several indoor experiments are conducted. The results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach, and it can also run in real time on a commonly used computer.
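
The chessboard scale anchor can be sketched as follows. The interface is an assumption: it takes adjacent-corner distances measured in the SLAM map's arbitrary units, compares them with the known physical square size, and rescales key-frame depths to metres.

```python
def metric_scale(reconstructed_corner_dists, square_size_m):
    """Scale factor mapping SLAM units to metres, from chessboard adjacent-corner distances."""
    mean_dist = sum(reconstructed_corner_dists) / len(reconstructed_corner_dists)
    return square_size_m / mean_dist

def correct_depth_map(depths, scale):
    """Apply the recovered metric scale to a key frame's depth values."""
    return [d * scale for d in depths]
```

Re-estimating the scale whenever the chessboard is observed is what lets the system correct accumulated scale drift between key frames.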

  19. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Full Text Available Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways: top-down versus bottom-up, and generative versus discriminative. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides the error measurement methods that are frequently used.

  20. Saccade amplitude disconjugacy induced by aniseikonia: role of monocular depth cues.

    Science.gov (United States)

    Pia Bucci, M; Kapoula, Z; Eggert, T

    1999-09-01

    The conjugacy of saccades is rapidly modified if the images are made unequal for the two eyes. Disconjugacy persists even in the absence of disparity, which indicates learning. Binocular visual disparity is a major cue to depth and is believed to drive the disconjugacy of saccades to aniseikonic images. The goal of the present study was to test whether monocular depth cues can also influence the disconjugacy of saccades. Three experiments were performed in which subjects were exposed for 15-20 min to a 10% image size inequality. Three different images were used: a grid that contained a single monocular depth cue strongly indicating a frontoparallel plane; a random-dot pattern that contained a less prominent monocular depth cue (absence of texture gradient) which also indicates the frontoparallel plane; and a complex image with several overlapping geometric forms that contained a variety of monocular depth cues. Saccades became disconjugate in all three experiments. The disconjugacy was larger and more persistent for the experiment using the random-dot pattern that had the least prominent monocular depth cues. The complex image, which had a large variety of monocular depth cues, produced the most variable and least persistent disconjugacy. We conclude that monocular depth cues modulate the disconjugacy of saccades stimulated by the disparity of aniseikonic images.

  1. Restocking the optical designers' toolbox for next-generation wearable displays (Presentation Recording)

    Science.gov (United States)

    Kress, Bernard C.

    2015-09-01

    Three years ago, industry and consumers learned that there was more to Head Mounted Displays (HMDs) than the long-lasting but steady market for defense or the market for gadget video player headsets: the first versions of Smart Glasses were introduced to the public. Since then, most major consumer electronics companies have unveiled their own versions of Connected Glasses, Smart Glasses or Smart Eyewear, AR (Augmented Reality) and VR (Virtual Reality) headsets. This rush has resulted in the build-up of a formidable zoo of optical technologies, each claiming to be best suited for the task at hand. Today, the question is not so much "will the Smart Glass market happen?" but rather "which optical technologies will be best fitted for the various declinations of the existing wearable display market," one of the main declinations being the Smart Glasses market.

  2. Projection-type see-through holographic three-dimensional display

    Science.gov (United States)

    Wakunami, Koki; Hsieh, Po-Yuan; Oi, Ryutaro; Senoh, Takanori; Sasaki, Hisayuki; Ichihashi, Yasuyuki; Okui, Makoto; Huang, Yi-Pai; Yamamoto, Kenji

    2016-10-01

    Owing to the limited spatio-temporal resolution of display devices, dynamic holographic three-dimensional displays suffer from a critical trade-off between the display size and the visual angle. Here we show a projection-type holographic three-dimensional display, in which a digitally designed holographic optical element and a digital holographic projection technique are combined to increase both factors at the same time. In the experiment, the enlarged holographic image, which is twice as large as the original display device, projected on the screen of the digitally designed holographic optical element was concentrated at the target observation area so as to increase the visual angle, which is six times as large as that for a general holographic display. Because the display size and the visual angle can be designed independently, the proposed system will accelerate the adoption of holographic three-dimensional displays in industrial applications, such as digital signage, in-car head-up displays, smart-glasses and head-mounted displays.

  3. Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens

    Science.gov (United States)

    Alonso, Julia R.

    2016-09-01

    3D-scene acquisition and representation is important in many areas ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D-scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized considering a piecewise-planar image formation model with a depth-variant point spread function (PSF) along with the known focusing distances at which the images of the stack were acquired. The consideration of a depth-variant PSF allows the application of the method to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting each eye the corresponding viewpoint according to the disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimation of the depth map or segmentation of the in-focus regions. Experimental results for both real and synthetic images are provided and presented as anaglyphs, but the method could easily be implemented on 3D displays based on parallax barriers or polarized light.
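
The synthetic pinhole shift can be sketched on a single scanline: each pixel is displaced horizontally by a disparity inversely proportional to its depth, once per eye. This is an illustrative toy (grey values in a list, integer disparities, no hole inpainting), not the paper's PSF-based reconstruction.

```python
def disparity(depth, baseline_px=8.0):
    """Integer horizontal disparity for a pixel at the given depth; nearer means larger."""
    return int(round(baseline_px / depth))

def shift_view(row, depths, sign):
    """Render one eye's view of a scanline by shifting each pixel by its disparity."""
    out = [0] * len(row)  # 0 marks a disocclusion hole; real systems inpaint these
    for x, (v, z) in enumerate(zip(row, depths)):
        nx = x + sign * disparity(z)
        if 0 <= nx < len(row):
            out[nx] = v
    return out

def stereo_pair(row, depths):
    """Left and right views for one scanline of a multi-focus reconstruction."""
    return shift_view(row, depths, +1), shift_view(row, depths, -1)
```

An anaglyph would then be composed by placing the left view in the red channel and the right view in the cyan channels.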

  4. Novel approach for mobile robot localization using monocular vision

    Science.gov (United States)

    Zhong, Zhiguang; Yi, Jianqiang; Zhao, Dongbin; Hong, Yiping

    2003-09-01

    This paper presents a novel approach for mobile robot localization using monocular vision. The proposed approach locates the robot relative to the target toward which it moves. Two points on the target are selected as feature points. Once the image coordinates of the two feature points are detected, the position and motion direction of the robot can be determined from the detected coordinates. Unlike the reported geometric pose estimation or landmark-matching methods, this approach requires neither artificial landmarks nor an accurate map of the indoor environment. It needs less computation and greatly simplifies the localization problem. The validity and flexibility of the proposed approach are demonstrated by experiments performed on real images. The results show that this new approach is not only simple and flexible but also has high localization precision.
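
The two-feature-point geometry can be illustrated with a pinhole model: the angular span between the two points gives range, and their mean bearing gives the heading error. The variable names and the fronto-parallel assumption are illustrative, not the paper's exact derivation.

```python
import math

def bearing(u, cx, focal_px):
    """Horizontal bearing (rad) of image column u in a pinhole camera with principal point cx."""
    return math.atan2(u - cx, focal_px)

def locate(u1, u2, cx, focal_px, separation_m):
    """Range to the target centre and heading error, from two feature-point columns.

    Assumes the two points are separated by separation_m on a fronto-parallel target.
    """
    b1, b2 = bearing(u1, cx, focal_px), bearing(u2, cx, focal_px)
    span = abs(b2 - b1)                              # angular size of the point pair
    distance = (separation_m / 2.0) / math.tan(span / 2.0)
    heading = (b1 + b2) / 2.0                        # target centre relative to optical axis
    return distance, heading
```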

  5. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor to employ with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. From the experimental results we see that the proposed system has the advantages of compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
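
The intensity-plus-2D-position idea can be sketched as follows: IR intensity falls off roughly with the inverse square of distance, so a factory calibration constant converts measured intensity to range, and the PSD spot gives the direction. The calibration values and pinhole direction model are assumptions for illustration.

```python
import math

def range_from_intensity(intensity, calib=1.0):
    """Distance to the marker, assuming the inverse-square model I = calib / r**2."""
    return math.sqrt(calib / intensity)

def marker_position(px, py, intensity, focal=1.0, calib=1.0):
    """3D marker position from the PSD spot (px, py) and received intensity."""
    r = range_from_intensity(intensity, calib)
    # Unit direction through the lens centre, pinhole approximation.
    norm = math.sqrt(px * px + py * py + focal * focal)
    return (r * px / norm, r * py / norm, r * focal / norm)
```

In practice the inverse-square model would also need per-marker LED power and angular emission corrections, which is what a one-time factory calibration can bake into `calib`.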

  6. Markerless monocular tracking system for guided external eye surgery.

    Science.gov (United States)

    Monserrat, C; Rupérez, M J; Alcañiz, M; Mataix, J

    2014-12-01

    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. This new tracking system performs a very accurate tracking of the eye by detecting invariant points using only textures that are present in the sclera, i.e., without using traditional features like the pupil and/or cornea reflections, which remain partially or totally occluded in most surgeries. Two known algorithms that compute invariant points and correspondences between pairs of images were implemented in our system: the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The results of experiments performed on phantom eyes show that, with either algorithm, the developed system tracks a sphere at a 360° rotation angle with an error that is lower than 0.5%. Some experiments have also been carried out on images of real eyes, showing promising behavior of the system in the presence of blood or surgical instruments during real eye surgery.
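
The correspondence step shared by SIFT and SURF pipelines is nearest-neighbour descriptor matching with Lowe's ratio test, sketched below on plain lists. In the paper the descriptors would come from sclera texture; here they are toy vectors.

```python
def dist2(a, b):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_descriptors(query, train, ratio=0.8):
    """Return (query_idx, train_idx) pairs that pass the nearest/second-nearest ratio test."""
    matches = []
    for qi, q in enumerate(query):
        d = sorted((dist2(q, t), ti) for ti, t in enumerate(train))
        # Accept only if the best match is clearly better than the runner-up.
        if len(d) >= 2 and d[0][0] < (ratio ** 2) * d[1][0]:
            matches.append((qi, d[0][1]))
    return matches
```

The ratio test is what discards ambiguous matches on repetitive texture, which matters on a surface as self-similar as the sclera.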

  7. Monocular vision based navigation method of mobile robot

    Institute of Scientific and Technical Information of China (English)

    DONG Ji-wen; YANG Sen; LU Shou-yin

    2009-01-01

    A trajectory tracking method is presented for the visual navigation of a monocular mobile robot. The robot moves along a line trajectory drawn beforehand and stops on a stop-sign to perform a specified task. The robot uses a forward-looking color digital camera to capture information in front of it, and the HSI color model is used to segment out the trajectory and the stop-sign. The "sampling estimate" method is then used to calculate the navigation parameters. The stop-sign is easily recognized, and 256 different signs can be identified. Tests indicate that the method tolerates a wide range of brightness and offers good robustness and real-time performance.
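
The HSI conversion behind that segmentation can be sketched as below; thresholding hue and saturation rather than raw RGB is what makes the trajectory and stop-sign masks less sensitive to brightness changes. The formula is the standard geometric RGB-to-HSI conversion, not necessarily the paper's exact variant.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert r, g, b in [0, 1] to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                      # achromatic pixel: hue undefined, use 0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                    # angles beyond 180° wrap the other way
            h = 360.0 - h
    return h, s, i
```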

  8. Monocular Obstacle Detection for Real-World Environments

    Science.gov (United States)

    Einhorn, Erik; Schroeter, Christof; Gross, Horst-Michael

    In this paper, we present a feature-based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted in front of a mobile robot. Using various techniques we are able to produce a precise reconstruction that is almost free from outliers and therefore can be used for reliable obstacle detection and avoidance. In real-world field tests we show that the presented approach is able to detect obstacles that cannot be seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can increase the detection rate of obstacles considerably, allowing the autonomous use of mobile robots in complex public and home environments.
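
The predict/update cycle at the heart of such an EKF can be illustrated with a scalar Kalman filter refining one feature's depth estimate over successive frames. This is a deliberately minimal stand-in for the paper's full multi-feature EKF state.

```python
class ScalarKF:
    """One-dimensional Kalman filter: a single state value and its variance."""

    def __init__(self, x0, p0):
        self.x, self.p = x0, p0

    def predict(self, q):
        # Process noise q inflates uncertainty between frames.
        self.p += q

    def update(self, z, r):
        # Blend the prediction with measurement z (variance r) via the Kalman gain.
        k = self.p / (self.p + r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Each tracked image feature would carry a filter like this (in the EKF, a nonlinear measurement model linearized per frame), and outlier rejection would gate measurements whose innovation `z - self.x` is implausibly large.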

  9. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex. Stimulus–response curves were constructed by recording the intensity of the reported phosphenes evoked in the contralateral visual field at a range of TMS intensities. Phosphene measurements revealed that MD produced a rapid and robust decrease in cortical excitability relative to a control condition without ...

  10. Monocular 3D scene reconstruction at absolute scale

    Science.gov (United States)

    Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael

    In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.
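
The simultaneous minimization described above can be sketched as a joint cost: bundle-adjustment reprojection residuals regularized by the absolute depths from Depth-from-Defocus, which pins down the otherwise unknown monocular scale. The function names and the simple sum-of-squares weighting are assumptions for illustration.

```python
def joint_cost(reproj_residuals, depths, dfd_depths, lam=1.0):
    """Structure-from-Motion reprojection error plus weighted Depth-from-Defocus depth error."""
    e_reproj = sum(r * r for r in reproj_residuals)          # pixel reprojection residuals
    e_depth = sum((z - zd) ** 2 for z, zd in zip(depths, dfd_depths))  # metres vs. DfD estimates
    return e_reproj + lam * e_depth
```

A nonlinear least-squares solver would minimize this over camera poses and 3D points; without the `lam * e_depth` term, the solution would only be determined up to an arbitrary global scale.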

  11. Military display market segment: wearable and portable

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    2003-09-01

    The military display market (MDM) is analyzed in terms of one of its segments, wearable and portable displays. Wearable and portable displays are those embedded in gear worn or carried by warfighters. Categories include hand-mobile (direct-view and monocular/binocular), palm-held, head/helmet-mounted, body-strapped, knee-attached, lap-born, neck-lanyard, and pocket/backpack-stowed. Some 62 fielded and developmental display sizes are identified in this wearable/portable MDM segment. Parameters requiring special consideration, such as weight, luminance ranges, light emission, viewing angles, and chromaticity coordinates, are summarized and compared. Ruggedized commercial versus commercial off-the-shelf designs are contrasted; and a number of custom displays are also found in this MDM category. Display sizes having aggregate quantities of 5,000 units or greater or having 2 or more program applications are identified. Wearable and portable displays are also analyzed by technology (LCD, LED, CRT, OLED and plasma). The technical specifications and program history of several high-profile military programs are discussed to provide a systems context for some representative displays and their function. As of August 2002 our defense-wide military display market study has documented 438,882 total display units distributed across 1,163 display sizes and 438 weapon systems. Wearable and portable displays account for 202,593 displays (46% of total DoD) yet comprise just 62 sizes (5% of total DoD) in 120 weapons systems (27% of total DoD). Some 66% of these wearable and portable applications involve low information content displays comprising just a few characters in one color; however, there is an accelerating trend towards higher information content units capable of showing changeable graphics, color and video.

  12. Spatial constraints of stereopsis in video displays

    Science.gov (United States)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereo-depth in video displays.

  13. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    Science.gov (United States)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.

  14. Reactivation of thalamocortical plasticity by dark exposure during recovery from chronic monocular deprivation

    Science.gov (United States)

    Montey, Karen L.; Quinlan, Elizabeth M.

    2015-01-01

    Chronic monocular deprivation induces severe amblyopia that is resistant to spontaneous reversal in adulthood. However, dark exposure initiated in adulthood reactivates synaptic plasticity in the visual cortex and promotes recovery from chronic monocular deprivation. Here we show that chronic monocular deprivation significantly decreases the strength of feedforward excitation and significantly decreases the density of dendritic spines throughout the deprived binocular visual cortex. Dark exposure followed by reverse deprivation significantly enhances the strength of thalamocortical synaptic transmission and the density of dendritic spines on principal neurons throughout the depth of the visual cortex. Thus dark exposure reactivates widespread synaptic plasticity in the adult visual cortex, including at thalamocortical synapses, during the recovery from chronic monocular deprivation. PMID:21587234

  15. Dynamic object recognition and tracking of mobile robot by monocular vision

    Science.gov (United States)

    Liu, Lei; Wang, Yongji

    2007-11-01

    Monocular vision is widely used in mobile robot motion control because of its simple structure and ease of use. The major topic of this paper is an integrated approach for recognizing and tracking specified color targets dynamically and precisely with monocular vision, based on the imaging principle. The processing pipeline follows the mechanisms of visual processing, including pretreatment and recognition stages. In particular, color models are utilized to decrease the influence of illumination. Applied algorithms suited to the practical application are used for image segmentation and clustering. After the target is recognized, because a monocular camera cannot obtain depth information directly, the 3D reconstruction principle is used to calculate the distance and direction from the robot to the target. To correct the monocular camera reading, a laser is used after the vision measurement. Finally, a visual servo system is designed to realize the robot's dynamic tracking of the moving target.

  16. Apparent motion of monocular stimuli in different depth planes with lateral head movements.

    Science.gov (United States)

    Shimono, K; Tam, W J; Ono, H

    2007-04-01

    A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced compared to that in Experiment 1 when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed using the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were treated as a binocular stimulus with regards to its visual direction and visual depth.

  17. The effect of monocular depth cues on the detection of moving objects by moving observers

    National Research Council Canada - National Science Library

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-01-01

    ... and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects...

  18. The role of monocularly visible regions in depth and surface perception.

    Science.gov (United States)

    Harris, Julie M; Wilcox, Laurie M

    2009-11-01

    The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.

  19. Head Worn Display System for Equivalent Visual Operations

    Science.gov (United States)

    Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob

    2009-01-01

    Head-worn displays, or so-called near-to-eye displays, have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and the display of spatially integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results of a fixed-base piloted simulation investigating the impact of near-to-eye displays on both operational and visual performance are reported. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilots' flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally support a monocular design with minimal impact due to eye dominance. Finally, a method for head-tracker system latency measurement is developed and used to compare two different devices.

  20. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    Science.gov (United States)

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction between binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition, whereas cylindrical refraction did not differ significantly between conditions (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029; adjusted R² = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show a large difference in spherical refraction between these two conditions.

  1. A Case of Functional (Psychogenic) Monocular Hemianopia Analyzed by Measurement of Hemifield Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Yoneda

    2013-12-01

    Purpose: Functional monocular hemianopia is an extremely rare condition, for which measurement of hemifield visual evoked potentials (VEPs) has not been previously described. Methods: A 14-year-old boy with functional monocular hemianopia was followed up with Goldmann perimetry and measurement of hemifield and full-field VEPs. Results: The patient had a history of monocular temporal hemianopia of the right eye following headache, nausea and ague. There was no relative afferent pupillary defect, and a color perception test was normal. Goldmann perimetry revealed a vertical monocular temporal hemianopia of the right eye; the hemianopia on the right was also detected with a binocular visual field test. Computed tomography, magnetic resonance imaging (MRI) and MR angiography of the brain including the optic chiasm, as well as orbital MRI, revealed no abnormalities. On the basis of these results, we diagnosed the patient's condition as functional monocular hemianopia. Pattern VEPs according to the International Society for Clinical Electrophysiology of Vision (ISCEV) standard were within the normal range. The hemifield pattern VEPs for the right eye showed a symmetrical latency and amplitude for nasal and temporal hemifield stimulation. One month later, the visual field defect of the patient spontaneously disappeared. Conclusions: The latency and amplitude of hemifield VEPs for a patient with functional monocular hemianopia were normal. Measurement of hemifield VEPs may thus provide an objective tool for distinguishing functional hemianopia from hemifield loss caused by an organic lesion.

  2. Projection displays

    Science.gov (United States)

    Chiu, George L.; Yang, Kei H.

    1998-08-01

    Projection display in today's market is dominated by cathode ray tubes (CRTs). Further progress in this mature CRT projector technology will be slow and evolutionary. Liquid crystal based projection displays have gained rapid acceptance in the business market. New technologies are being developed on several fronts: (1) active matrices built from polysilicon or single-crystal silicon; (2) electro-optic materials using ferroelectric liquid crystals, polymer-dispersed liquid crystals or other liquid crystal modes; (3) micromechanical transducers such as digital micromirror devices and grating light valves; (4) high-resolution displays to SXGA and beyond; and (5) high brightness. This article reviews projection displays from a transducer technology perspective, along with a discussion of markets and trends.

  3. Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms.

    Science.gov (United States)

    Grove, Philip M; Gillam, Barbara; Ono, Hiroshi

    2002-07-01

    Perceived depth was measured for three types of stereograms with the colour/texture of half-occluded (monocular) regions either similar or dissimilar to that of binocular regions or the background. In a two-panel random dot stereogram, the monocular region was filled with texture either similar or different to the far panel, or left blank. In unpaired background stereograms, the monocular region either matched the background or differed in colour or texture; in phantom stereograms, the monocular region either matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture matched neither the background nor the more distant surface. The content and context of monocular regions, as well as their position, are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental-view accounts of these effects.

  4. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion on the hardware components, software components, and system integration. The elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system. A new method is proposed for the simultaneous calibration of camera internal parameters and hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is better than 0.15 mm, which meets the requirement of robotic drilling for aircraft structures.
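
    The core of the hole-detection step is fitting a conic to edge points of the reference hole. The paper fits ellipses; as a simplified stand-in illustrating the same least-squares idea, the sketch below fits a circle (Kasa method) to edge points and recovers the hole centre. This is our simplification, not the authors' algorithm.

```python
# Least-squares circle fit (Kasa method) to 2D edge points, as a simplified
# stand-in for the paper's elliptical contour extraction. Pure stdlib.
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_circle(points):
    """Least-squares circle through 2D edge points -> (cx, cy, radius)."""
    # Model x^2 + y^2 + D*x + E*y + F = 0, which is linear in D, E, F.
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0, 0.0, 0.0]
    for x, y in points:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    D, E, F = solve3(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    r = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, r

# Edge points sampled from a hole of radius 2 centred at (10, 5).
pts = [(10 + 2 * math.cos(t), 5 + 2 * math.sin(t))
       for t in (i * 2 * math.pi / 12 for i in range(12))]
cx, cy, r = fit_circle(pts)
print(round(cx, 6), round(cy, 6), round(r, 6))
```

    A real implementation would fit a general conic (five ellipse parameters) to tolerate the perspective distortion of a circular hole viewed off-axis.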

  5. Deep monocular 3D reconstruction for assisted navigation in bronchoscopy.

    Science.gov (United States)

    Visentini-Scarzanella, Marco; Sugiura, Takamasa; Kaneko, Toshimitsu; Koto, Shinichiro

    2017-07-01

    In bronchoscopy, computer vision systems for navigation assistance are an attractive low-cost solution to guide the endoscopist to target peripheral lesions for biopsy and histological analysis. We propose a decoupled deep learning architecture that projects input frames onto the domain of CT renderings, thus allowing offline training from patient-specific CT data. A fully convolutional network architecture is implemented on GPU and tested on a phantom dataset involving 32 video sequences and ~60k frames with aligned ground truth and renderings, which is made available as the first public dataset for bronchoscopy navigation. An average estimated depth accuracy of 1.5 mm was obtained, outperforming conventional direct depth estimation from input frames by 60%, with a computational time of ~30 ms on modern GPUs. Qualitatively, the estimated depth and renderings closely resemble the ground truth. The proposed method shows a novel architecture to perform real-time monocular depth estimation without losing patient specificity in bronchoscopy. Future work will include integration within SLAM systems and collection of in vivo datasets.
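
    The 1.5 mm figure above is an average depth accuracy over aligned frames. A minimal evaluation helper of that kind (our sketch, not the authors' evaluation code) might look like:

```python
# Mean absolute error between predicted and ground-truth depth maps, the kind
# of per-frame metric behind a "depth accuracy in mm" figure. Toy data only.
def mean_abs_depth_error(pred, truth):
    """Mean absolute per-pixel error between two depth maps (row lists, mm)."""
    flat = [(p, t) for rp, rt in zip(pred, truth) for p, t in zip(rp, rt)]
    return sum(abs(p - t) for p, t in flat) / len(flat)

pred = [[10.0, 12.0], [8.0, 9.0]]    # predicted depths (mm), hypothetical
truth = [[11.0, 12.0], [9.0, 10.0]]  # aligned ground-truth depths (mm)
print(mean_abs_depth_error(pred, truth))  # 0.75
```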

  6. Global localization from monocular SLAM on a mobile phone.

    Science.gov (United States)

    Ventura, Jonathan; Arth, Clemens; Reitmayr, Gerhard; Schmalstieg, Dieter

    2014-04-01

    We propose the combination of a keyframe-based monocular SLAM system and a global localization method. The SLAM system runs locally on a camera-equipped mobile client and provides continuous, relative 6DoF pose estimation as well as keyframe images with computed camera locations. As the local map expands, a server process localizes the keyframes with a pre-made, globally-registered map and returns the global registration correction to the mobile client. The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.
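
    The correction step described above maps locally estimated keyframe poses into the globally registered frame. A minimal 2D sketch of that idea (our simplification with assumed names; the paper works with full 6DoF poses) is:

```python
# Apply a server-supplied similarity correction (scale s, rotation theta,
# translation t) to locally estimated keyframe positions: p' = s * R * p + t.
# 2D for brevity; names and values are illustrative, not from the paper.
import math

def apply_correction(points, s, theta, tx, ty):
    """Map local 2D keyframe positions into the global frame."""
    c, si = math.cos(theta), math.sin(theta)
    return [(s * (c * x - si * y) + tx, s * (si * x + c * y) + ty)
            for x, y in points]

local_keyframes = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
# Hypothetical server result: local map at half the true scale, rotated 90
# degrees, offset by (5, 2) relative to the global map.
global_keyframes = apply_correction(local_keyframes, 2.0, math.pi / 2, 5.0, 2.0)
print([(round(x, 6), round(y, 6)) for x, y in global_keyframes])
```

    In the paper the correction is refined continuously: global anchor points enter the client-side bundle adjustment, so drift is limited rather than corrected in one shot.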

  7. Monocular visual scene understanding: understanding multi-object traffic scenes.

    Science.gov (United States)

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  8. 3D environment capture from monocular video and inertial data

    Science.gov (United States)

    Clark, R. Robert; Lin, Michael H.; Taylor, Colin J.

    2006-02-01

    This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.
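
    The sparse structure-and-motion stage above fuses inertial data with visual tracks through a Kalman filter. As a toy illustration of that fusion idea only (a 1D constant-velocity filter with assumed noise values, not the paper's estimator):

```python
# One predict/update cycle of a 1D Kalman filter: predict with the inertial
# acceleration as control input, correct with a position observation from the
# video track. State is (position x, velocity v) with 2x2 covariance P.
def kalman_step(x, v, P, a, z, dt, q, r):
    # Predict: x' = x + v*dt + 0.5*a*dt^2, v' = v + a*dt
    x_p = x + v * dt + 0.5 * a * dt * dt
    v_p = v + a * dt
    # Covariance predict: P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # Update with position measurement z (H = [1, 0], noise variance r)
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    y = z - x_p
    x_n, v_n = x_p + k0 * y, v_p + k1 * y
    P_n = [[(1 - k0) * p00, (1 - k0) * p01],
           [p10 - k1 * p00, p11 - k1 * p01]]
    return x_n, v_n, P_n

# Track simulated motion under constant acceleration a = 1 m/s^2.
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
dt = 0.1
for k in range(1, 11):
    t = k * dt
    true_pos = 0.5 * t * t
    x, v, P = kalman_step(x, v, P, 1.0, true_pos, dt, 1e-3, 1e-2)
print(round(x, 6), round(v, 6))  # estimate tracks the simulated motion
```

    The paper's filter additionally estimates 3D structure and the unknown camera-to-IMU offsets, and is interleaved with repeated partial bundle adjustments.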

  9. Mobile Robot Hierarchical Simultaneous Localization and Mapping Using Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A hierarchical mobile robot simultaneous localization and mapping (SLAM) method that allows accurate maps to be obtained is presented. The local map level is composed of a set of local metric feature maps that are guaranteed to be statistically independent. The global level is a topological graph whose arcs are labeled with the relative locations between local maps. An estimate of these relative locations is maintained with a local map alignment algorithm, and a more accurate estimate is calculated through a global minimization procedure using the loop closure constraint. The local map is built with a Rao-Blackwellised particle filter (RBPF), where the particle filter is used to extend the path posterior by sampling new poses. Landmark position estimation and update are implemented through an extended Kalman filter (EKF). Monocular vision mounted on the robot tracks 3D natural point landmarks, which are structured with matching scale invariant feature transform (SIFT) feature pairs. Matching of the multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log N). Experimental results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
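
    The k-d tree behind the O(log N) matching step can be sketched in a few lines. This is an illustrative implementation of the generic data structure, not the paper's code, and it uses 2D points for brevity (real SIFT descriptors are 128-dimensional, where exact k-d search degrades and approximate variants are common):

```python
# Generic k-d tree with nearest-neighbour query, illustrating the descriptor
# matching step. Expected O(log N) per query for low-dimensional points.
def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    if node is None:
        return best
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    if best is None or dist2(node["point"]) < dist2(best):
        best = node["point"]
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if diff * diff < dist2(best):  # hypersphere crosses the splitting plane
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # (8, 1) is the closest stored point
```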

  10. Surgical outcome in monocular elevation deficit: A retrospective interventional study

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Rakhi

    2008-01-01

    Background and Aim: Monocular elevation deficiency (MED) is characterized by a unilateral defect in elevation, caused by paretic, restrictive or combined etiology. Treatment of this multifactorial entity is therefore varied. In this study, we performed different surgical procedures in patients with MED and evaluated their outcome, based on ocular alignment, improvement in elevation and binocular functions. Study Design: Retrospective interventional study. Materials and Methods: Twenty-eight patients were included in this study, from June 2003 to August 2006. Five patients underwent the Knapp procedure, with or without horizontal squint surgery; 17 patients had inferior rectus recession, with or without horizontal squint surgery; three patients had combined inferior rectus recession and Knapp procedure; and three patients had inferior rectus recession combined with contralateral superior rectus or inferior oblique surgery. The choice of procedure was based on the results of the forced duction test (FDT). Results: The forced duction test was positive in 23 cases (82%). Twenty-four of 28 patients (86%) were aligned to within 10 prism diopters. Elevation improved in 10 patients (36%) from no elevation above primary position (-4) to only slight limitation of elevation (-1). Five patients had preoperative binocular vision and none gained it postoperatively. No significant postoperative complications or duction abnormalities were observed during the follow-up period. Conclusion: Management of MED depends upon selection of the correct surgical technique, based on the results of the FDT, for a satisfactory outcome.

  11. Motion parallax in immersive cylindrical display systems

    Science.gov (United States)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer viewpoint has therefore become indispensable in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used e.g. in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers because of image distortions when rendering images for viewpoints away from a sweet spot. We developed a technique to compensate these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancy between visual and extra-retinal cues tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
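
    The "motion parallax gain" manipulation reduces to scaling the tracked head displacement before driving the virtual camera. A minimal 1D sketch (our simplification and naming, not the paper's implementation):

```python
# Virtual camera position as a gain-scaled head displacement from a neutral
# point. gain = 1 reproduces natural parallax; gain > 1 amplifies it.
def camera_position(head_pos, neutral_pos, gain):
    """Scale the head displacement from a neutral point by a parallax gain."""
    return neutral_pos + gain * (head_pos - neutral_pos)

print(round(camera_position(0.10, 0.0, 1.0), 3))  # natural: camera moves 0.10 m
print(round(camera_position(0.10, 0.0, 1.5), 3))  # amplified: camera moves 0.15 m
```

    The study's finding is then that gains below 1 disturb posture, while moderately amplified gains are tolerated, motivating amplification as a way to widen the explorable space.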

  12. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation.

    Science.gov (United States)

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-03-11

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
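
    The scale-recovery idea above can be reduced to a one-line ratio: monocular VO yields a trajectory up to an unknown scale, and a laser range measurement to a tracked point fixes that scale. The sketch below is our simplification (the paper continuously fuses measurements to correct scale drift, rather than applying a single global factor):

```python
# Recover the metric scale of a monocular VO trajectory from one laser
# range / VO depth pair, then rescale the trajectory. Illustrative values.
def recover_scale(laser_distance_m, estimated_depth_units):
    """Metric scale factor from a laser range and the VO depth of that point."""
    return laser_distance_m / estimated_depth_units

def rescale_trajectory(positions, scale):
    """Apply the recovered metric scale to VO positions (x, y, z tuples)."""
    return [(scale * x, scale * y, scale * z) for x, y, z in positions]

# VO places the laser-spotted point at depth 4.0 (unitless); the laser reads 2.0 m.
s = recover_scale(2.0, 4.0)
traj = rescale_trajectory([(0.0, 0.0, 0.0), (4.0, 0.0, 2.0)], s)
print(s, traj)  # trajectory is now in metres
```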

  13. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but did not change the visual acuity of the amblyopic eyes. Therefore our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. MONOCULAR AND BINOCULAR VISION IN THE PERFORMANCE OF A COMPLEX SKILL

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2011-09-01

    The goal of this study was to investigate the role of binocular and monocular vision in 16 gymnasts as they perform a handspring on vault. In particular, we reasoned that if binocular visual information is eliminated while experts and apprentices perform a handspring on vault, and their performance level changes or is maintained, then such information must or must not be necessary for their best performance. If the elimination of binocular vision leads to differences in gaze behavior in either experts or apprentices, this would indicate whether gaze behavior adapts, and whether such adaptation is a function of expertise level. Gaze behavior was measured using a portable and wireless eye-tracking system in combination with a movement-analysis system. Results revealed that gaze behavior differed between experts and apprentices in the binocular and monocular conditions. In particular, apprentices showed fewer fixations of longer duration in the monocular condition as compared to experts and the binocular condition. Apprentices showed longer blink duration than experts in both the monocular and binocular conditions. Eliminating binocular vision led to a shorter repulsion phase and a longer second flight phase in apprentices. Experts exhibited no differences in phase durations between binocular and monocular conditions. Findings suggest that experts may not rely on binocular vision when performing handsprings, and that movement performance may be influenced in apprentices when binocular vision is eliminated. We conclude that knowledge about gaze-movement relationships may be beneficial for coaches when teaching the handspring on vault in gymnastics.

  15. The precision of binocular and monocular depth judgments in natural settings.

    Science.gov (United States)

    McKee, Suzanne P; Taylor, Douglas G

    2010-08-01

    We measured binocular and monocular depth thresholds for objects presented in a real environment. Observers judged the depth separating a pair of metal rods presented either in relative isolation, or surrounded by other objects, including a textured surface. In the isolated setting, binocular thresholds were greatly superior to the monocular thresholds, by as much as a factor of 18. The presence of adjacent objects and textures improved the monocular thresholds somewhat, but the superiority of binocular viewing remained substantial (roughly a factor of 10). To determine whether motion parallax would improve monocular sensitivity in the textured setting, we asked observers to move their heads laterally, so that the viewing eye was displaced by 8-10 cm; this motion produced little improvement in the monocular thresholds. We also compared disparity thresholds measured with the real rods to thresholds measured with virtual images in a standard mirror stereoscope. Surprisingly, for the two naive observers, the stereoscope thresholds were far worse than the thresholds for the real rods, a finding that indicates that stereoscope measurements for unpracticed observers should be treated with caution. With practice, the stereoscope thresholds for one observer improved to almost the precision of the thresholds for the real rods.

  16. Patterns of non-embolic transient monocular visual field loss.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36 %, to the left in 47 %, and occurred independently in either eye in 5 % of cases. A past medical history of migraine was present in 12 % and a family history in 8 %. Headache followed neTMVL in 14 % and was associated with autonomic features in 3 %. The neTMVL was perceived as grey in 35 %, white in 21 %, black in 16 % and as phosphenes in 9 %. Most frequently (20 %), the neTMVL was patchy. Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests that some attacks are caused by vasospasm.

  17. Assessing the binocular advantage in aided vision.

    Science.gov (United States)

    Harrington, Lawrence K; McIntire, John P; Hopper, Darrel G

    2014-09-01

    Advances in microsensors, microprocessors, and microdisplays are creating new opportunities for improving vision in degraded environments through the use of head-mounted displays. Initially, the cutting-edge technology used in these new displays will be expensive. Inevitably, the cost of providing the additional sensor and processing required to support binocularity brings the value of binocularity into question. Several assessments comparing binocular, biocular, and monocular head-mounted displays for aided vision have concluded that the additional performance, if any, provided by binocular head-mounted displays does not justify the cost. The selection of a biocular display for use in the F-35 is a current example of this recurring decision process. It is possible that the human binocularity advantage does not carry over to the aided vision application, but more likely the experimental approaches used in the past have been too coarse to measure its subtle but important benefits. Evaluating the value of binocularity in aided vision applications requires an understanding of the characteristics of both human vision and head-mounted displays. With this understanding, the value of binocularity in aided vision can be estimated, and experimental evidence can be collected to confirm or reject the presumed binocular advantage, enabling improved decisions in aided vision system design. This paper describes four computational approaches that may be useful in quantifying the advantage of binocularity in aided vision: geometry of stereopsis, modulation transfer function area for stereopsis, probability summation, and binocular summation.
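
    Two of the four computational approaches named above have standard closed forms, shown here as an illustrative implementation (ours, not the paper's): probability summation assumes each eye detects independently, and binocular (quadratic) summation combines the two eyes' contrast sensitivities.

```python
# Standard two-eye summation models used to predict a binocular advantage.
def probability_summation(p_left, p_right):
    """P(detect with either eye), assuming independent monocular detection."""
    return 1.0 - (1.0 - p_left) * (1.0 - p_right)

def binocular_summation(s_left, s_right):
    """Quadratic summation of the two eyes' contrast sensitivities."""
    return (s_left ** 2 + s_right ** 2) ** 0.5

print(probability_summation(0.5, 0.5))          # 0.75
print(round(binocular_summation(1.0, 1.0), 3))  # ~1.414: the classic sqrt(2) gain
```

    Under these models a binocular display should raise detection probability and yield roughly a √2 sensitivity gain over a monocular one, which is the kind of subtle benefit the author argues coarse experiments have missed.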

  18. The contribution of monocular depth cues to scene perception by pigeons.

    Science.gov (United States)

    Cavoto, Brian R; Cook, Robert G

    2006-07-01

    The contributions of different monocular depth cues to performance of a scene perception task were investigated in 4 pigeons. They discriminated the sequential depth ordering of three geometric objects in computer-rendered scenes. The orderings of these objects were specified by the combined presence or absence of the pictorial cues of relative density, occlusion, and relative size. In Phase 1, the pigeons learned the task as a direct function of the number of cues present. The three monocular cues contributed equally to the discrimination. Phase 2 established that differential shading on the objects provided an additional discriminative cue. These results suggest that the pigeon visual system is sensitive to many of the same monocular depth cues that are known to be used by humans. The theoretical implications for a comparative psychology of picture processing are considered.

  19. Refractive error and monocular viewing strengthen the hollow-face illusion.

    Science.gov (United States)

    Hill, Harold; Palmisano, Stephen; Matthews, Harold

    2012-01-01

    We measured the strength of the hollow-face illusion--the 'flipping distance' at which perception changes between convex and concave--as a function of a lens-induced 3 dioptre refractive error and monocular/binocular viewing. Refractive error and closing one eye both strengthened the illusion to approximately the same extent. The illusion was weakest viewed binocularly without refractive error and strongest viewed monocularly with it. This suggests binocular cues disambiguate the illusion at greater distances than monocular cues, but that both are disrupted by refractive error. We argue that refractive error leaves the ambiguous low-spatial-frequency shading information critical to the illusion largely unaffected while disrupting other, potentially disambiguating, depth/distance cues.

  20. A new combination of monocular and stereo cues for dense disparity estimation

    Science.gov (United States)

    Mao, Miao; Qin, Kaihuai

    2013-07-01

    Disparity estimation is a popular and important topic in computer vision and robotics. Stereo matching is commonly used for this task, but most existing methods fail in textureless regions and fall back on numerical interpolation there. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method combining monocular and stereo cues to compute dense disparities from a pair of images. The image is partitioned into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions. For unreliable regions, we then use k-means to find the most similar reliable regions in terms of monocular cues. Our method is simple and effective. Experiments show that our method generates a more accurate disparity map than existing methods on images with large textureless regions, e.g. snow and icebergs.
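
    The matching step can be illustrated with a toy version of the clustering it relies on. Below, reliable regions are clustered by a single monocular feature value (our simplification: one scalar per region instead of the paper's full monocular cue set), after which an unreliable region would inherit the disparity of its nearest cluster:

```python
# Plain 1D k-means over per-region monocular feature values. Toy data.
def kmeans_1d(values, k, iters=20):
    """Cluster scalars with k-means; returns the k cluster centres."""
    centres = sorted(values)[:: max(1, len(values) // k)][:k]  # spread-out init
    for _ in range(iters):
        buckets = [[] for _ in centres]
        for v in values:
            i = min(range(len(centres)), key=lambda j: abs(v - centres[j]))
            buckets[i].append(v)
        centres = [sum(b) / len(b) if b else c for b, c in zip(buckets, centres)]
    return centres

features = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]  # monocular cue per reliable region
centres = kmeans_1d(features, 2)
print([round(c, 3) for c in centres])  # two groups, near 0.15 and 0.85
```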

  2. Differential processing of binocular and monocular gloss cues in human visual cortex.

    Science.gov (United States)

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  3. Eye movements in chameleons are not truly independent - evidence from simultaneous monocular tracking of two targets.

    Science.gov (United States)

    Katz, Hadas Ketter; Lustig, Avichai; Lev-Ari, Tidhar; Nov, Yuval; Rivlin, Ehud; Katzir, Gadi

    2015-07-01

    Chameleons perform large-amplitude eye movements that are frequently referred to as independent, or disconjugate. When prey (an insect) is detected, the chameleon's eyes converge to view it binocularly and 'lock' in their sockets so that subsequent visual tracking is by head movements. However, the extent of the eyes' independence is unclear. For example, can a chameleon visually track two small targets simultaneously and monocularly, i.e. one with each eye? This is of special interest because eye movements in ectotherms and birds are frequently independent, with optic nerves that are fully decussated and intertectal connections that are not as developed as in mammals. Here, we demonstrate that chameleons presented with two small targets moving in opposite directions can perform simultaneous, smooth, monocular, visual tracking. To our knowledge, this is the first demonstration of such a capacity. The fine patterns of the eye movements in monocular tracking were composed of alternating, longer, 'smooth' phases and abrupt 'step' events, similar to smooth pursuits and saccades. Monocular tracking differed significantly from binocular tracking with respect to both 'smooth' phases and 'step' events. We suggest that in chameleons, eye movements are not simply 'independent'. Rather, at the gross level, eye movements are (i) disconjugate during scanning, (ii) conjugate during binocular tracking and (iii) disconjugate, but coordinated, during monocular tracking. At the fine level, eye movements are disconjugate in all cases. These results support the view that in vertebrates, basic monocular control is under a higher level of regulation that dictates the eyes' level of coordination according to context. © 2015. Published by The Company of Biologists Ltd.

  4. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames's Hypothesis.

    Science.gov (United States)

    Vishwanath, Dhanraj

    2016-03-01

    Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames's claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  5. Elimination of aniseikonia in monocular aphakia with a contact lens-spectacle combination.

    Science.gov (United States)

    Schechter, R J

    1978-01-01

    Correction of monocular aphakia with contact lenses generally results in aniseikonia in the range of 7-9%; with correction by intraocular lenses, aniseikonia is approximately 2%. We present a new method of correcting aniseikonia in monocular aphakics using a contact lens-spectacle combination. A formula is derived wherein the contact lens is deliberately overcorrected; this overcorrection is then neutralized by the appropriate spectacle lens, to be worn over the contact lens. Calculated results with this system over a wide range of possible situations consistently yield an aniseikonia of 0.1%.
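
The record does not reproduce the derivation, but the idea can be sketched with the textbook thin-lens approximations (an illustrative assumption, not Schechter's exact formula): a minus spectacle worn at vertex distance d over an over-plussed contact lens acts as a reverse Galilean telescope whose minification offsets the aphakic magnification.

```python
# Illustrative sketch, NOT the paper's derivation: textbook "power
# factor" magnification of a lens of power F (diopters) worn at vertex
# distance d (meters). A contact lens sits at d ~ 0, so its power
# factor is ~1; a spectacle lens at d ~ 12 mm is not.
def spectacle_magnification(F, d):
    return 1.0 / (1.0 - d * F)

def effective_power_at_cornea(F, d):
    # Effective power of a spectacle lens referred to the corneal plane
    return F / (1.0 - d * F)

def design_combo(target_magnification, d=0.012):
    """Pick a spectacle power giving the desired minification, plus the
    contact-lens overcorrection that neutralizes its refractive effect."""
    Fs = (1.0 - 1.0 / target_magnification) / d     # invert M = 1/(1 - d*Fs)
    dF_contact = -effective_power_at_cornea(Fs, d)  # over-plus the contact lens
    return Fs, dF_contact
```

For example, cancelling a ~7% residual magnification (target magnification 0.93) calls for roughly a -6.3 D spectacle over a contact lens over-plussed by about +5.8 D, in the spirit of the combination described above.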

  6. END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS

    Directory of Open Access Journals (Sweden)

    C. Pinard

    2017-08-01

    Full Text Available We propose a depth map inference system for monocular videos, based on a novel navigation dataset that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the absence of rotation makes for an easier structure-from-motion problem, which can be leveraged for tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although tied to the camera's intrinsic parameters, the problem is locally solvable and leads to good-quality depth prediction.
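
A minimal sketch of the geometry such a network learns, assuming a rigid scene and rotation fully removed by the stabilizer (first-order approximations for context, not the paper's model):

```python
# With rotation removed by the gimbal, optical flow depends only on
# camera translation and scene depth. For pure forward motion t_z, a
# feature at radius r pixels from the focus of expansion moves outward
# by dr ~= r * t_z / Z between frames (first order), so:
def depth_from_radial_flow(r_px, dr_px, t_z):
    """Depth Z from radial flow dr of a feature at radius r (forward motion)."""
    return r_px * t_z / dr_px

# For pure lateral translation t_x, depth follows the stereo relation
# Z = f * t_x / u_flow, with f the focal length in pixels.
def depth_from_lateral_flow(f_px, u_flow_px, t_x):
    return f_px * t_x / u_flow_px
```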

  7. Stereopsis has the edge in 3-D displays

    Science.gov (United States)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  8. Stochastically optimized monocular vision-based navigation and guidance

    Science.gov (United States)

    Watanabe, Yoko

    The objective of this thesis is to design a relative navigation and guidance law for unmanned aerial vehicles, or UAVs, for vision-based control applications. The autonomous operation of UAVs has developed progressively in recent years. In particular, vision-based navigation, guidance and control has been one of the most actively studied topics in UAV automation. This is because, in nature, birds and insects use vision as their exclusive sensor for object detection and navigation. Furthermore, a vision sensor is efficient since it is compact, light-weight and low-cost. This thesis therefore studies monocular vision-based navigation and guidance of UAVs. Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image-processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight between two UAVs has been achieved, and the results are shown in this thesis to verify the estimation performance of the EKF. In addition, vision-based 3-D terrain recovery was performed in simulations to present a navigation design capable of estimating the states of multiple objects. In this problem, the statistical z-test is applied to solve the correspondence problem of relating measurements to estimation states. As a practical example of vision-based control applications for UAVs, a vision-based obstacle avoidance problem is specifically addressed in this thesis. A navigation and guidance system is designed for a UAV to achieve a waypoint-tracking mission while avoiding unforeseen stationary obstacles by using vision information. An EKF is applied to estimate each obstacle's position from the vision-based information. A collision criterion is established by using a collision-cone approach and a time-to-go criterion.
A minimum
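
A deliberately simplified 2-D, bearing-only sketch of the EKF measurement update at the heart of such a navigation system (the thesis's filter is 3-D and also propagates vehicle motion; the state layout and noise values here are assumptions):

```python
import math

# Simplified 2-D, bearing-only EKF update: estimate a stationary
# obstacle's relative position [x, y] from a noisy camera bearing
# z = atan2(y, x) + noise. Only the update step is shown.
def ekf_bearing_update(x, P, z, R):
    px, py = x
    r2 = px * px + py * py
    H = [-py / r2, px / r2]                      # Jacobian of atan2(y, x)
    innov = (z - math.atan2(py, px) + math.pi) % (2 * math.pi) - math.pi
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]      # P H^T
    S = H[0] * PHt[0] + H[1] * PHt[1] + R        # innovation covariance (scalar)
    K = [PHt[0] / S, PHt[1] / S]                 # Kalman gain
    x_new = [px + K[0] * innov, py + K[1] * innov]
    P_new = [[P[0][0] - K[0] * PHt[0], P[0][1] - K[0] * PHt[1]],
             [P[1][0] - K[1] * PHt[0], P[1][1] - K[1] * PHt[1]]]   # (I - K H) P
    return x_new, P_new
```

A single accurate bearing pulls the estimate onto the measured line of sight, while range stays unobservable until the camera translates, which is one reason such filters are paired with guidance.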

  9. Perception of Acceleration in Motion-In-Depth With Only Monocular and Binocular Information

    Directory of Open Access Journals (Sweden)

    Santiago Estaún

    2003-01-01

    Full Text Available Perception of acceleration in motion-in-depth with monocular information and with monocular plus binocular information. On many occasions we need to adjust our actions to objects that change their acceleration. However, no evidence of direct perception of acceleration has been found. Instead, we appear to be able to detect changes of velocity in 2-D motion within a temporal window. Moreover, recent results suggest that motion-in-depth is detected through changes of position. Hence, to detect acceleration in depth, the visual system would need to carry out some kind of second-order computation. In two experiments, we show that observers do not perceive acceleration in approach trajectories, at least within the ranges used [600-800 ms], resulting in an overestimation of arrival time. Regardless of the visibility condition (monocular only, or monocular plus binocular), responses conform to a constant-velocity strategy. Nevertheless, the overestimation is reduced when binocular information is available.

  10. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions. PMID:21724567

  11. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual results, preserving or even improving preoperative best-corrected visual acuity.

  13. Depth scaling in phantom and monocular gap stereograms using absolute distance information.

    Science.gov (United States)

    Kuroki, Daiichiro; Nakamizo, Sachio

    2006-11-01

    The present study aimed to investigate whether the visual system scales apparent depth from binocularly unmatched features by using absolute distance information. In Experiment 1 we examined the effect of convergence on perceived depth in phantom stereograms [Gillam, B., & Nakayama, K. (1999). Quantitative depth for a phantom surface can be based on cyclopean occlusion cues alone. Vision Research, 39, 109-112.], monocular gap stereograms [Pianta, M. J., & Gillam, B. J. (2003a). Monocular gap stereopsis: manipulation of the outer edge disparity and the shape of the gap. Vision Research, 43, 1937-1950.] and random dot stereograms. In Experiments 2 and 3 we examined the effective range of viewing distances for scaling the apparent depths in these stereograms. The results showed that: (a) the magnitudes of perceived depths increased in all stereograms as the estimate of the viewing distance increased while keeping proximal and/or distal sizes of the stimuli constant, and (b) the effective range of viewing distances was significantly shorter in monocular gap stereograms. The first result indicates that the visual system scales apparent depth from unmatched features as well as that from horizontal disparity, while the second suggests that, at far distances, the strength of the depth signal from an unmatched feature in monocular gap stereograms is relatively weaker than that from horizontal disparity.
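
The scaling at issue follows the standard small-angle disparity relation, stated here for context (the paper measures perceived depth rather than computing this formula, and the interocular separation used below is an assumed value):

```python
# Standard small-angle relation behind depth scaling: a relative
# disparity eta (radians) at viewing distance D corresponds to a depth
# interval of roughly eta * D^2 / I, with I the interocular separation.
def depth_from_disparity(eta_rad, D_m, I_m=0.065):  # 65 mm IPD is an assumption
    return eta_rad * D_m ** 2 / I_m
```

Doubling the estimated viewing distance quadruples the depth implied by the same disparity, which is why manipulating convergence-based distance estimates changes perceived depth.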

  14. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing-angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing-angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole display surface. Simulating the display's appearance from the viewing-angle measurements gives a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
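
A common black-level-corrected definition of crosstalk, assumed here since the abstract does not give the paper's exact formula:

```python
# Common 3-D crosstalk definition (an assumption; papers vary in the
# exact formula): luminance leaking into one eye's view from the other
# eye's image, relative to the intended signal, black-level corrected.
def crosstalk(leak_lum, signal_lum, black_lum):
    return (leak_lum - black_lum) / (signal_lum - black_lum)
```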

  15. Universal Numeric Segmented Display

    CERN Document Server

    Azad, Md Abul kalam; Kamruzzaman, S M

    2010-01-01

    Segmented displays play a vital role in presenting numerals. Matrix displays are also used for numerals today, since numerals have many curved edges that a matrix renders better; but because matrix displays are costly, complex to implement and require more memory, segmented displays are generally preferred for numerals. As there is as yet no compact display architecture for showing the numerals of multiple languages at once, this paper proposes a uniform display architecture that can show the digits of multiple languages, as well as general mathematical expressions, with higher accuracy and simplicity by using an 18-segment display, an improvement over the 16-segment display.
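
For context, the familiar 7-segment encoding that an 18-segment design generalizes can be sketched as a lookup table (the paper's actual 18-segment layout is not reproduced here):

```python
# Standard 7-segment encodings (segments labeled a..g: a top, b top-right,
# c bottom-right, d bottom, e bottom-left, f top-left, g middle). The
# paper's 18-segment layout extends this idea to multiple scripts.
SEVEN_SEG = {
    '0': 'abcdef', '1': 'bc',     '2': 'abdeg', '3': 'abcdg',   '4': 'bcfg',
    '5': 'acdfg',  '6': 'acdefg', '7': 'abc',   '8': 'abcdefg', '9': 'abcdfg',
}

def segments_for(number):
    """Segment pattern, digit by digit, for a non-negative integer."""
    return [SEVEN_SEG[ch] for ch in str(number)]
```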

  16. The Second Generation High Speed Rotor Head Mounted Instrumentation System

    Science.gov (United States)

    Lewis, John; Reynolds, R. S. (Technical Monitor)

    1997-01-01

    NASA Ames Research Center has been investigating the air pressure flow of a rotor blade on a UH-60 Black Hawk helicopter in-flight. This paper will address the changes and improvements due to additional restrictions and requirements for the instrumentation system. The second generation instrumentation system was substantially larger and this allowed greatly improved accessibility to the components for ease of maintenance as well as improved gain and offset adjustment capabilities and better filtering.

  17. Reflections of Head Mounted systems for Domotic Control

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2010-01-01

    In this report we investigate the generalization of the concept of gaze interaction and the possibility of using a gaze tracker for interaction not only with a single computer screen but also with multiple computer screens and possibly other environment objects in an int

  18. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2). METHODS: A transversal comparative study was performed using a structured questionnaire from a previous exploratory study, answered before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by 40.6% of Group 1 and 22.7% of Group 2 (p<0.001); among the main causes of fear were the possibility of vision loss, surgical complications, and death during the procedure. The most common feelings in both groups were doubts about the results of the surgery and nervousness about the procedure. CONCLUSION: Patients with monocular vision showed more fear and doubts related to cataract surgery than those with binocular vision. It is therefore necessary that physicians consider these emotional reactions and spend more time explaining the risks and benefits of cataract surgery.

  19. Monocular surgery for large-angle esotropias: review and new paradigms

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2010-08-01

    Full Text Available The primitive strabismus surgeries, myotomies and tenotomies, were performed simply by sectioning the muscle or its tendon, without any suture. Such surgeries were usually performed on just one eye, for both small and large deviations, with poorly predictable results. In 1922, Jameson introduced a new surgical technique that used sutures to fix the sectioned muscle to the sclera, making surgery more predictable. For esotropias he performed recessions of at most 5 mm of the medial rectus, which became a rule for the surgeons who followed him; from then on it was impossible to correct large-angle esotropias with monocular surgery. Rodriguez-Vásquez, in 1974, exceeded the 5 mm parameter by proposing large recessions of the medial recti (6 to 9 mm) to treat the Ciancia syndrome, with good results. The authors reviewed the literature year by year to compare the various studies, and concluded that monocular recession-resection surgery can be a viable option for the surgical treatment of large-angle esotropias.

  20. Evaluation of the reproducibility of Merchán's monocular dynamic retinoscopy

    Directory of Open Access Journals (Sweden)

    Lizbeth Acuña

    2010-08-01

    Full Text Available Objective: To evaluate the reproducibility of monocular dynamic retinoscopy and its level of agreement with binocular and monocular static retinoscopy, Nott retinoscopy, and the Monocular Estimate Method (MEM). Methods: Reproducibility between examiners and between methods was determined by means of the intraclass correlation coefficient (ICC), and Bland-Altman limits of agreement were established. Results: 126 people between 5 and 39 years of age were evaluated, and inter-examiner reproducibility of monocular dynamic retinoscopy was low in both eyes (ICC right eye: 0.49, 95% CI 0.36-0.51; left eye: 0.51, 95% CI 0.38-0.59). The limit of agreement between examiners was ±1.25 D. When evaluating reproducibility between monocular dynamic and static retinoscopy, the greatest reproducibility was obtained with binocular and monocular static retinoscopy and, at near vision, between the monocular estimate method and Nott retinoscopy. Conclusions: Monocular dynamic retinoscopy is not a reproducible test and shows clinically significant differences in determining refractive status, in terms of dioptric power and type of ametropia; it therefore cannot be considered part of the battery of tests applied to determine refractive diagnoses and corrections for either distance or near vision.

  2. Comparing the effectiveness of different displays in enhancing illusions of self-movement (vection).

    Science.gov (United States)

    Riecke, Bernhard E; Jordan, Jacqueline D

    2015-01-01

    Illusions of self-movement (vection) can be used in virtual reality (VR) and other applications to give users the embodied sensation that they are moving when physical movement is unfeasible or too costly. Whereas a large body of vection literature studied how various parameters of the presented visual stimulus affect vection, little is known about how different display types might affect vection. As a step toward addressing this gap, we conducted three experiments to compare vection and usability parameters between commonly used VR displays, ranging from stereoscopic projection and 3D TV to high-end head-mounted display (HMD, NVIS SX111) and recent low-cost HMD (Oculus Rift). The last experiment also compared these two HMDs in their native full field of view (FOV) and a reduced, matched FOV of 72° × 45°. Participants moved along linear and curvilinear paths in the virtual environment, reported vection onset time, and rated vection intensity at the end of each trial. In addition, user ratings on immersion, motion sickness, vection, and overall preference were recorded retrospectively and compared between displays. Unexpectedly, there were no significant effects of display on vection measures. Reducing the FOV for the HMDs (from full to 72° × 45°) decreased vection onset latencies, but did not affect vection intensity. As predicted, curvilinear paths yielded earlier and more intense vection. Although vection has often been proposed to predict or even cause motion sickness, we observed no correlation for any of the displays studied. In conclusion, perceived self-motion and other user experience measures proved surprisingly tolerant toward changes in display type as long as the FOV was roughly matched. This suggests that display choice for vection research and VR applications can be largely based on other considerations as long as the provided FOV is sufficiently large.
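
Matching FOV across displays, as in the last experiment, follows the standard screen-geometry relation (a general formula, not taken from the paper):

```python
import math

# Standard screen geometry: a flat display of width w viewed at
# distance d subtends a horizontal angle of 2 * atan(w / (2d)).
def fov_deg(width, distance):
    return math.degrees(2.0 * math.atan2(width / 2.0, distance))
```

For instance, matching the 72° horizontal FOV used in the experiment requires a width-to-distance ratio of about 2·tan(36°) ≈ 1.45.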

  5. Embolic and nonembolic transient monocular visual field loss: a clinicopathologic review.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Hu, Han-Hwa; Plant, Gordon T

    2013-01-01

    Transient monocular blindness and amaurosis fugax are umbrella terms describing a range of patterns of transient monocular visual field loss (TMVL). The incidence rises from ≈1.5/100,000 in the third decade of life to ≈32/100,000 in the seventh decade of life. We review the vascular supply of the retina that provides an anatomical basis for the types of TMVL and discuss the importance of collaterals between the external and internal carotid artery territories and related blood flow phenomena. Next, we address the semiology of TMVL, focusing on onset, pattern, trigger factors, duration, recovery, frequency, and associated features such as headache, and on tests that help with the important differential between embolic and non-embolic etiologies.

  6. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

    In this paper, a monocular vision measurement system based on cooperative-target detection is proposed, which can capture the three-dimensional information of objects by recognizing a checkerboard target and calculating its feature points. Aircraft pose measurement is an important problem for aircraft monitoring and control, and a monocular vision system performs well at meter-scale range. This paper proposes an algorithm based on a coplanar rectangular feature to determine a unique solution for distance and angle. A continuous-frame detection method is presented to solve the problem of corner transitions caused by the symmetry of the targets. In addition, a test system based on a three-dimensional precision displacement table and human-computer interaction measurement software has been built. Experimental results show a precision of 2 mm in the range of 300 mm to 1000 mm, which meets the requirements of position measurement in the aircraft cabin.
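
The pinhole relations underlying this kind of cooperative-target ranging can be sketched as follows (a simplification assuming a fronto-parallel target, not the paper's full coplanar-rectangle algorithm):

```python
import math

# Pinhole-camera sketch: distance from the known physical width of a
# fronto-parallel target, and bearing from its pixel offset relative
# to the principal point. f_px is the focal length in pixels.
def distance_from_width(f_px, target_width_m, image_width_px):
    return f_px * target_width_m / image_width_px

def bearing_rad(f_px, u_px, cx_px):
    return math.atan2(u_px - cx_px, f_px)
```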

  7. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometric measurement technique with a mature theoretical basis, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of a moving target with the series of sight-rays along which a moving camera observes it, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach that overcomes the failure of traditional monocular techniques to measure the 3D motion of a point target, and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observation of moving targets from various mobile platforms.
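
The trajectory-ray intersection can be sketched in 2-D under a constant-velocity target model (an illustrative simplification; the method itself is formulated in 3-D and supports general parametric trajectories):

```python
import math

# 2-D sketch: a target moves with constant velocity p(t) = p0 + v*t; a
# moving camera at c_i measures its bearing at time t_i. Each sight-ray
# gives one linear constraint n_i . (p0 + v*t_i - c_i) = 0, with n_i
# normal to the ray. Stacking constraints and solving least squares
# recovers p0 and v.
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def trajectory_intersection(obs):
    """obs: list of (t, camera_xy, bearing). Returns (p0, v) as (x, y) pairs."""
    # Normal equations A^T A s = A^T b for rows n . [p0x, p0y, vx, vy]
    AtA = [[0.0] * 4 for _ in range(4)]
    Atb = [0.0] * 4
    for t, (cx, cy), th in obs:
        ux, uy = math.cos(th), math.sin(th)
        nx, ny = -uy, ux                      # normal to the sight-ray
        row = [nx, ny, nx * t, ny * t]
        rhs = nx * cx + ny * cy
        for i in range(4):
            Atb[i] += row[i] * rhs
            for j in range(4):
                AtA[i][j] += row[i] * row[j]
    s = solve_linear(AtA, Atb)
    return (s[0], s[1]), (s[2], s[3])
```

With four or more bearings from a maneuvering camera, the stacked system is generically full rank, which is exactly why the rays need not meet at a single point as in the regular intersection method.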

  8. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available This study investigates ship-bridge collision avoidance. A novel ship-bridge collision avoidance system based on monocular computer vision is proposed. In the new system, moving ships are first captured in video sequences, and detection and tracking of the moving objects identify the regions in the scene that correspond to them. Next, a quantitative description of the dynamic state of each moving object in the geographical coordinate system, including location, velocity and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship manipulation commands are suggested accordingly, aiming to avoid a potential collision. Both computer simulation and field experiments have been implemented to validate the proposed system, and the analysis results show its effectiveness.
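
    The monocular-vision-geometry step is typically a flat-plane back-projection: with a calibrated camera at a known height and tilt above the water, a tracked pixel maps to a unique point on the water surface. The sketch below uses assumed intrinsics, camera height and tilt (not the paper's calibration):

```python
import math

def pixel_to_plane(u, v, fx, fy, cx, cy, height, tilt):
    # Back-project the pixel to a viewing ray in the camera frame
    # (x right, y down, z forward), rotate by the downward tilt, and
    # intersect the ray with the plane `height` metres below the camera.
    xc, yc = (u - cx) / fx, (v - cy) / fy
    s, c = math.sin(tilt), math.cos(tilt)
    dx = xc                      # lateral component
    dy = -yc * s + c             # forward component along the plane
    dz = -yc * c - s             # vertical component (down is negative)
    if dz >= 0:
        raise ValueError("ray does not intersect the plane")
    t = height / -dz
    return t * dx, t * dy        # lateral and forward ground position

# A ship imaged at the principal point of a camera 20 m above the water,
# tilted down by 10 degrees, lies height/tan(tilt) metres ahead.
x, y = pixel_to_plane(320, 240, 800, 800, 320, 240, 20.0, math.radians(10))
```

    Differencing successive ground positions over the frame interval then gives the velocity and course used in the collision-risk evaluation.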

  9. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used in 3D reconstruction. Applying the system to 3D reconstruction must take into account effects caused by the digital camera. There are two possible ways to build the monocular stereo vision system: in the first, the distance between the target object and the camera image plane is constant and the lens moves; the second assumes that the lens position is constant and the image plane moves with respect to the target. In this paper, mathematical models of the two approaches are presented. We focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are implemented and simulated in Matlab. The analysis is used to define application constraints and limitations of these methods. The results can also be used to enhance the accuracy of depth measurement.
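
    The discretization effect can be illustrated with the classic two-view model (the monocular system emulates the second viewpoint by moving the lens or image plane): integer pixel disparities yield a discrete family of iso-disparity depths whose spacing widens with range. Focal length and baseline below are assumed values:

```python
# Spatial discretization in stereo depth: pixel-quantized disparity
# admits only discrete reconstructable depths (iso-disparity surfaces).
f_px = 1000.0          # focal length in pixels (assumed)
baseline = 0.1         # metres between the two effective viewpoints (assumed)

def depth_of_disparity(d_px):
    # pinhole stereo model: z = f * b / d
    return f_px * baseline / d_px

levels = [depth_of_disparity(k) for k in range(1, 6)]
gaps = [a - b for a, b in zip(levels, levels[1:])]
# gaps shrink as disparity grows: near depths are finely sampled, far
# depths coarsely, with error growing roughly as z**2 / (f * b).
```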

  10. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    YU QiFeng; SHANG Yang; ZHOU Jian; ZHANG XiaoHu; LI LiChun

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometrics measurement with a mature theoretical system, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of a moving target with the series of sight-rays by which a moving camera observes the target, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach to overcome the failure of traditional monocular measurement techniques for the 3D motion of a point target, and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observations of moving targets from various moving platforms.

  11. Large-scale monocular FastSLAM2.0 acceleration on an embedded heterogeneous architecture

    Science.gov (United States)

    Abouzahir, Mohamed; Elouardi, Abdelhafid; Bouaziz, Samir; Latif, Rachid; Tajer, Abdelouahed

    2016-12-01

    Simultaneous localization and mapping (SLAM) is widely used in many robotic applications and in autonomous navigation. This paper presents a study of the computational complexity of FastSLAM2.0 based on a monocular vision system. The algorithm is intended to operate with many particles in a large-scale environment. FastSLAM2.0 was partitioned into functional blocks, allowing hardware/software matching on a CPU-GPGPU-based SoC architecture. Performance in terms of processing time and localization accuracy was evaluated using a real indoor dataset. Results demonstrate that an optimized and efficient CPU-GPGPU partitioning yields accurate localization and high-speed execution of a monocular FastSLAM2.0-based embedded system operating under real-time constraints.
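
    The appeal of a CPU-GPGPU split comes from the structure of particle filtering: prediction and weighting are independent per particle (GPU-friendly), while resampling is a global reduction (often left on the CPU). This 1-D toy filter is not the paper's FastSLAM2.0 implementation; it only illustrates the three functional blocks:

```python
import random, math

def particle_filter_step(particles, control, measurement, noise=0.1):
    # Predict: motion model applied independently to every particle.
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight: measurement likelihood, also embarrassingly parallel.
    w = [math.exp(-0.5 * ((measurement - p) / noise) ** 2) for p in moved]
    total = sum(w)
    w = [x / total for x in w]
    # Resample: global, sequential step (CPU side in a CPU-GPGPU split).
    return random.choices(moved, weights=w, k=len(moved))

random.seed(0)
cloud = [0.0] * 1000                 # particles all start at the origin
cloud = particle_filter_step(cloud, control=1.0, measurement=1.0)
estimate = sum(cloud) / len(cloud)   # posterior mean after one step
```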

  12. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man presented to the Ophthalmic Center, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians should look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil-type erectile dysfunction drugs.

  13. Benign pituitary adenoma associated with hyperostosis of the sphenoid bone and monocular blindness. Case report.

    Science.gov (United States)

    Milas, R W; Sugar, O; Dobben, G

    1977-01-01

    The authors describe a case of benign chromophobe adenoma associated with hyperostosis of the lesser wing of the sphenoid bone and monocular blindness in a 38-year-old woman. The endocrinological and radiological evaluations were all suggestive of a meningioma. The diagnosis was established by biopsy of the tumor mass. After orbital decompression and removal of the tumor, the patient was treated with radiation therapy. Her postoperative course was uneventful, and her visual defects remained fixed.

  14. Augmented reality three-dimensional display with light field fusion.

    Science.gov (United States)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chengyu

    2016-05-30

    A video see-through augmented reality three-dimensional display method is presented. The system, used for dense-viewpoint augmented reality presentation, naturally fuses the light fields of the real scene and the virtual model. Inherently benefiting from the rich information of the light field, depth sense and occlusion can be handled without a priori depth information of the real scene. A series of processes is proposed to optimize the augmented reality performance. Experimental results show that the reconstructed fused 3D light field on the autostereoscopic display is well presented. The virtual model is naturally integrated into the real scene with consistency between binocular parallax and monocular depth cues.

  15. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  16. Monocular blur alters the tuning characteristics of stereopsis for spatial frequency and size.

    Science.gov (United States)

    Li, Roger W; So, Kayee; Wu, Thomas H; Craven, Ashley P; Tran, Truyet T; Gustafson, Kevin M; Levi, Dennis M

    2016-09-01

    Our sense of depth perception is mediated by spatial filters at different scales in the visual brain; low spatial frequency channels provide the basis for coarse stereopsis, whereas high spatial frequency channels provide for fine stereopsis. It is well established that monocular blurring of vision results in decreased stereoacuity. However, previous studies have used tests that are broadband in their spatial frequency content. It is not yet entirely clear how the processing of stereopsis in different spatial frequency channels is altered in response to binocular input imbalance. Here, we applied a new stereoacuity test based on narrow-band Gabor stimuli. By manipulating the carrier spatial frequency, we were able to reveal the spatial frequency tuning of stereopsis, spanning from coarse to fine, under blurred conditions. Our findings show that increasing monocular blur elevates stereoacuity thresholds 'selectively' at high spatial frequencies, gradually shifting the optimum frequency to lower spatial frequencies. Surprisingly, stereopsis for low frequency targets was only mildly affected even with an acuity difference of eight lines on a standard letter chart. Furthermore, we examined the effect of monocular blur on the size tuning function of stereopsis. The clinical implications of these findings are discussed.

  17. Short-term monocular patching boosts the patched eye’s response in visual cortex

    Science.gov (United States)

    Zhou, Jiawei; Baker, Daniel H.; Simard, Mathieu; Saint-Amour, Dave; Hess, Robert F.

    2015-01-01

    Abstract Purpose: Several recent studies have demonstrated that following short-term monocular deprivation in normal adults, the patched eye, rather than the unpatched eye, becomes stronger in subsequent binocular viewing. However, little is known about the site and nature of the underlying processes. In this study, we examine the underlying mechanisms by measuring steady-state visual evoked potentials (SSVEPs) as an index of the neural contrast response in early visual areas. Methods: The experiment consisted of three consecutive stages: a pre-patching EEG recording (14 minutes), a monocular patching stage (2.5 hours) and a post-patching EEG recording (14 minutes; started immediately after the removal of the patch). During the patching stage, a diffuser (transmits light but not pattern) was placed in front of one randomly selected eye. During the EEG recording stage, contrast response functions for each eye were measured. Results: The neural responses from the patched eye increased after the removal of the patch, whilst the responses from the unpatched eye remained the same. Such phenomena occurred under both monocular and dichoptic viewing conditions. Conclusions: We interpret this eye dominance plasticity in adult human visual cortex as homeostatic intrinsic plasticity regulated by an increase of contrast-gain in the patched eye. PMID:26410580

  18. Measuring method for the object pose based on monocular vision technology

    Science.gov (United States)

    Sun, Changku; Zhang, Zimiao; Wang, Peng

    2010-11-01

    Position and orientation estimation of an object, which can be widely applied in fields such as robot navigation, surgery and electro-optic aiming systems, has important value. A monocular vision positioning algorithm based on point features is studied and a new measurement method is proposed in this paper. First, the approximate coordinates of the five reference points in the camera coordinate system, used as the initial values for iteration, are calculated according to weakP3P; second, the exact coordinates of the reference points in the camera coordinate system are obtained through iterative calculation using the constraint relationships among the reference points; finally, the position and orientation of the object are obtained. In this way the measurement model of monocular vision is constructed. To verify the accuracy of the measurement model, a planar target using infrared LEDs as reference points is designed, and the corresponding image-processing algorithm is studied. A monocular vision experimental system was then established. Experimental results show that the translational positioning accuracy reaches ±0.05 mm and the rotary positioning accuracy reaches ±0.2°.
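
    The iterative step can be sketched as a Gauss-Newton refinement of the reference-point depths: each point is constrained to lie on its sight ray, P_i = s_i * ray_i, and the depths s_i are adjusted until the inter-point distances match the known target geometry. The geometry and starting values below are made up; the paper's own constraint equations and weakP3P initialization are not reproduced:

```python
import numpy as np

def refine_depths(rays, pairs, dists, s0, iters=20):
    # Gauss-Newton on ray depths s_i so that the reconstructed points
    # satisfy the known inter-point distance constraints.
    s = np.asarray(s0, dtype=float).copy()
    for _ in range(iters):
        P = s[:, None] * rays
        r = np.array([np.linalg.norm(P[i] - P[j]) - d
                      for (i, j), d in zip(pairs, dists)])
        J = np.zeros((len(pairs), len(s)))
        for k, (i, j) in enumerate(pairs):
            diff = P[i] - P[j]
            n = np.linalg.norm(diff)
            J[k, i] = diff @ rays[i] / n      # d|Pi - Pj| / d s_i
            J[k, j] = -diff @ rays[j] / n     # d|Pi - Pj| / d s_j
        s -= np.linalg.lstsq(J, r, rcond=None)[0]
    return s

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.5], [0.0, 1.0, 5.2]])
rays = pts / np.linalg.norm(pts, axis=1, keepdims=True)    # unit sight rays
s_true = np.linalg.norm(pts, axis=1)
pairs = [(0, 1), (0, 2), (1, 2)]
dists = [np.linalg.norm(pts[i] - pts[j]) for i, j in pairs]
s_est = refine_depths(rays, pairs, dists, s_true * 1.05)   # coarse initial guess
```

    With a reasonable initial guess (here 5% off, standing in for the weakP3P estimate) the refinement converges to the exact depths, from which the target pose follows.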

  19. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal of understanding neuroplasticity, which is a key characteristic of the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images; the fused percept relies heavily on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. Binocular rivalry, by contrast, reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes.

  20. Depth reversals in stereoscopic displays driven by apparent size

    Science.gov (United States)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  1. A virtual reality oriented clinical experiment on post-stroke rehabilitation: performance and preference comparison among different stereoscopic displays

    Science.gov (United States)

    Yeh, Shih-Ching; Rizzo, Albert; Sawchuk, Alexander A.

    2007-02-01

    We have developed a novel VR task: the Dynamic Reaching Test, that measures human forearm movement in 3D space. In this task, three different stereoscopic displays: autostereoscopic (AS), shutter glasses (SG) and head mounted display (HMD), are used in tests in which subjects must catch a virtual ball thrown at them. Parameters such as percentage of successful catches, movement efficiency (subject path length compared to minimal path length), and reaction time are measured to evaluate differences in 3D perception among the three stereoscopic displays. The SG produces the highest percentage of successful catches, though the difference between the three displays is small, implying that users can perform the VR task with any of the displays. The SG and HMD produced the best movement efficiency, while the AS was slightly less efficient. Finally, the AS and HMD produced similar reaction times that were slightly higher (by 0.1 s) than the SG. We conclude that SG and HMD displays were the most effective, but only slightly better than the AS display.

  2. Cirurgia monocular para esotropias de grande ângulo: um novo paradigma Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias, through large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more were operated under general anesthesia, without per- or postoperative adjustments. The methods used for refractometry and for measuring visual acuity and the angle of deviation were those traditionally used in strabology. Postoperatively, in addition to measurements in the primary position of gaze, the motility of the operated eye was evaluated in adduction and abduction. RESULTS: Four study groups were considered, corresponding to four time periods: one week, six months, two years, and four to seven years. The postoperative deviation angles were compatible with those reported in the literature and remained stable over time. The operated eye showed a slight limitation in adduction and none in abduction, contrary to findings in the strabismus literature. Comparing the results of adults with those of children, and of amblyopes with non-amblyopes, no statistically significant differences were found. CONCLUSION: In view of these results, monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, for adults as well as children, and for amblyopes as well as non-amblyopes.

  3. Displaying gray shades in liquid crystal displays

    Indian Academy of Sciences (India)

    T N Ruckmongathan

    2003-08-01

    Quality of image in a display depends on the contrast, colour, resolution and the number of gray shades. A large number of gray shades is necessary to display images without contour lines; these contours arise when a limited number of gray shades causes abrupt changes in the grayness of the image where the original has a gradual change in brightness. Amplitude modulation can display a large number of gray shades with a minimum number of time intervals [1,2]. This paper covers the underlying principle of amplitude modulation, some variants, and its extension to multi-line addressing. Other techniques for displaying gray shades in passive matrix displays are reviewed for comparison.

  4. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images, such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, mostly using hand-crafted features. Recently, mounting evidence has shown that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can naturally be formulated as a continuous conditional random field (CRF) learning problem. Here we therefore present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNNs and continuous CRFs. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of the continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in closed form, so we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem to predict the depths of a test image is highly efficient, as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
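
    The closed-form inference claim can be illustrated with a toy quadratic CRF (the paper's potentials are learned by a CNN; the numbers here are made up): with unary terms (z_i - u_i)^2 and pairwise terms w_ij (z_i - z_j)^2, the MAP depths solve the linear system (I + lam * L) z = u, where L is the weighted graph Laplacian over neighbouring superpixels:

```python
import numpy as np

def crf_depth_inference(unary, edges, weights, lam=1.0):
    # Build the weighted graph Laplacian L and solve (I + lam*L) z = u.
    n = len(unary)
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return np.linalg.solve(np.eye(n) + lam * L, np.asarray(unary, float))

u = [1.0, 3.0]                     # noisy unary depth predictions
z = crf_depth_inference(u, edges=[(0, 1)], weights=[1.0], lam=10.0)
# smoothing pulls neighbouring depths together but preserves their sum,
# since the Laplacian rows sum to zero
```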

  5. More clinical observations on migraine associated with monocular visual symptoms in an Indian population

    Directory of Open Access Journals (Sweden)

    Vishal Jogi

    2016-01-01

    Full Text Available Context: Retinal migraine (RM) is considered one of the rare causes of transient monocular visual loss (TMVL) and has not been studied in an Indian population. Objectives: The study aims to analyze the clinical and investigational profile of patients with RM. Materials and Methods: This is an observational prospective analysis of 12 cases of TMVL fulfilling the International Classification of Headache Disorders-2nd edition (ICHD-II) criteria of RM, examined in the Neurology and Ophthalmology Outpatient Department (OPD) of the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh from July 2011 to October 2012. Results: Most patients presented in the 3rd and 4th decade, with equal sex distribution. Seventy-five percent had antecedent migraine without aura (MoA) and 25% had migraine with aura (MA). Headache was ipsilateral to visual symptoms in 67% and bilateral in 33%. TMVL preceded headache onset in 58% and occurred during the headache episode in 42%. Visual symptoms were predominantly negative, except in one patient who had positive followed by negative symptoms. The duration of visual symptoms was variable, ranging from 30 s to 45 min. No patient had permanent monocular vision loss. Three patients had episodes of TMVL without headache in addition to the symptom constellation defining RM. Most of the tests done to rule out alternative causes were normal. Magnetic resonance imaging (MRI) of the brain showed nonspecific white matter changes in one patient. Visual evoked potentials (VEP) showed prolonged P100 latencies in two cases. A patent foramen ovale was detected in one patient. Conclusions: RM is a definite subtype of migraine and should remain in the ICHD classification. It should be kept as one of the differential diagnoses of transient monocular vision loss. We propose the existence of "acephalgic RM", which may respond to migraine prophylaxis.

  6. P2-1: Visual Short-Term Memory Lacks Sensitivity to Stereoscopic Depth Changes but is Much Sensitive to Monocular Depth Changes

    Directory of Open Access Journals (Sweden)

    Hae-In Kang

    2012-10-01

    Full Text Available Depth from both binocular disparity and monocular depth cues is presumably one of the most salient features characterizing the variety of visual objects in daily life. It is therefore plausible to expect that human vision should be good at perceiving an object's depth change arising from binocular disparities and monocular pictorial cues. However, what if the estimated depth needs to be remembered in visual short-term memory (VSTM) rather than just perceived? In a series of experiments, we asked participants to remember the depth of items in an array at the beginning of each trial. A set of test items followed the memory array, and the participants were asked to report whether one of the items in the test array had changed its depth from the remembered items. The items differed from each other in three depth conditions: (1) stereoscopic depth under binocular disparity manipulations, (2) monocular depth under pictorial cue manipulations, and (3) both stereoscopic and monocular depth. The accuracy of detecting depth change was substantially higher in the monocular condition than in the binocular condition, and accuracy in the both-depth condition was moderately improved compared to the monocular condition. These results indicate that VSTM benefits more from monocular depth than from stereoscopic depth, and further suggest that storage of depth information in VSTM requires both binocular and monocular information for optimal memory performance.

  7. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    Science.gov (United States)

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating the perception of depth have relied solely on one type of depth cue, based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues, natural stereoscopic images were used in this study. Using slow cortical potentials and source localization, we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility of separating the processing of different depth cues.

  8. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three-dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter: using a non-linear inverse kinematics solver allows us to perform the filtering in end-effector space. This effectively reduces the dimensionality of the state space while still allowing for the estimation of a large set of motions. Preliminary experiments with the strategy show good results compared to a full-pose tracker.

  9. Effect of ophthalmic filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination.

    Science.gov (United States)

    Richer, S P; Little, A C; Adams, A J

    1984-11-01

    The majority of ophthalmic filters, whether they be in the form of spectacles or contact lenses, are absorbance type filters. Although color vision researchers routinely provide spectrophotometric transmission profiles of filters, filter thickness is rarely specified. In this paper, colorimetric tools and volume color theory are used to show that the color of a filter as well as its physical properties are altered dramatically by changes in thickness. The effect of changes in X-Chrom filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination is presented.
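
    The thickness dependence follows from the Beer-Lambert law, the standard relation behind volume color theory: internal transmittance scales exponentially with path length, so doubling an absorbance filter's thickness squares its transmittance at every wavelength, changing both its density and its perceived color. The reference values below are illustrative, not X-Chrom measurements:

```python
def transmittance(T_ref, t_ref, t_new):
    # Beer-Lambert scaling: T(t_new) = T(t_ref) ** (t_new / t_ref),
    # applied per wavelength of the spectrophotometric profile.
    return T_ref ** (t_new / t_ref)

# a filter passing 50% at 1.0 mm passes 25% at 2.0 mm
# and about 71% at 0.5 mm
thick = transmittance(0.5, 1.0, 2.0)
thin = transmittance(0.5, 1.0, 0.5)
```

    This is why a transmission profile quoted without its thickness underdetermines the filter's predicted effect on luminance and chromaticity discrimination.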

  10. Estimating 3D positions and velocities of projectiles from monocular views.

    Science.gov (United States)

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
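
    The reason a unique solution can exist at all is that gravity breaks the usual monocular scale ambiguity: each sight ray d_t must be parallel to the position p0 + v0*t + 0.5*g*t^2 (stationary camera at the origin), so the cross-product constraint d_t x X(t) = 0 is linear in the six unknowns (p0, v0) with a gravity-driven right-hand side. The sketch below uses a synthetic noise-free trajectory; the paper itself minimizes a nonlinear cost robustly rather than solving this linear system:

```python
import numpy as np

G = np.array([0.0, -9.8, 0.0])          # gravity in the camera frame (assumed)

def skew(v):
    # matrix form of the cross product: skew(v) @ x == np.cross(v, x)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def fit_projectile(times, dirs):
    # Stack d_t x (p0 + v0*t) = -d_t x (0.5*g*t^2) for all observations.
    A, b = [], []
    for t, d in zip(times, dirs):
        S = skew(d)
        A.append(np.hstack([S, S * t]))
        b.append(-S @ (0.5 * G * t * t))
    x = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
    return x[:3], x[3:]                 # initial position and velocity

p0 = np.array([0.0, 10.0, 20.0])
v0 = np.array([2.0, 5.0, -1.0])
ts = np.linspace(0.0, 1.0, 6)
obs = [p0 + v0 * t + 0.5 * G * t * t for t in ts]
dirs = [p / np.linalg.norm(p) for p in obs]   # observed bearing directions
p0_est, v0_est = fit_projectile(ts, dirs)
```

    Without the gravity term the right-hand side vanishes and the system becomes homogeneous, recovering the trajectory only up to scale, which is one way to see the minimum conditions the paper derives.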

  11. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    Science.gov (United States)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by LAC require modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles is sufficient information for local AGV navigation.

  12. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier-optics viewing-angle measurements. Full polarization analysis of the light emitted by the display across the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase-shift filter within the display structure. In addition, strong spectral dependence of the ellipticity and degree of polarization is observed. These features come from the strong spectral dependence of the phase-shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two glasses filters, the resulting luminance across each filter is computed for the left- and right-eye views. Monocular contrast for each eye and binocular contrast are computed in the observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for auto-stereoscopic 3D displays, allowing direct comparison of performance.
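
    The luminance-through-filter computation reduces to a mixing model: each eye receives its intended view attenuated by the filter's co-polarized transmission plus a leakage fraction of the other view, and contrast follows from the white- and black-state results. The transmission and luminance numbers below are illustrative, not the paper's measured values:

```python
def eye_luminance(L_intended, L_other, T_signal, T_leak):
    # luminance reaching one eye through its circular-polarizer filter
    return T_signal * L_intended + T_leak * L_other

# white/black display states as seen by the left eye
white = eye_luminance(100.0, 1.0, T_signal=0.35, T_leak=0.02)
black = eye_luminance(1.0, 100.0, T_signal=0.35, T_leak=0.02)
monocular_contrast = white / black
```

    Repeating this at each viewing angle and wavelength, for each eye separately and for both together, is what yields the monocular and binocular contrast maps from which QMVS and QBVS are delimited.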

  13. Invisible Display in Aluminum

    DEFF Research Database (Denmark)

    Prichystal, Jan Phuklin; Hansen, Hans Nørgaard; Bladt, Henrik Henriksen

    2005-01-01

    for an integrated display in a metal surface is often ruled by design and functionality of a product. The integration of displays in metal surfaces requires metal removal in order to clear the area of the display to some extent. The idea behind an invisible display in Aluminum concerns the processing of a metal...

  14. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics of vehicles and their susceptibility to shaking. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm requires only the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves range accuracy significantly.
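    A pose-range variation model of this kind can be sketched with flat-ground pinhole geometry: a pitch change measured by another sensor is folded back into the projection before the range is read off the pixel row. The function names and the flat-floor assumption below are illustrative, not the paper's exact formulation.

```python
import math

def ground_range(v, cy, fy, cam_height, pitch):
    """Range to a ground point imaged at pixel row v, assuming a flat floor.

    v: pixel row of the object's ground-contact point
    cy, fy: principal-point row and focal length (pixels)
    cam_height: camera height above the ground (metres)
    pitch: camera pitch below the horizon (radians)
    """
    ray_angle = pitch + math.atan2(v - cy, fy)  # angle of the ray below horizon
    if ray_angle <= 0:
        raise ValueError("ray does not intersect the ground plane")
    return cam_height / math.tan(ray_angle)

def rectified_range(v, cy, fy, cam_height, nominal_pitch, pitch_shake):
    """Correct the range estimate by folding the measured pitch change
    (e.g. from an IMU) back into the projection model."""
    return ground_range(v, cy, fy, cam_height, nominal_pitch + pitch_shake)
```

    For a camera 1 m above the floor looking at the horizon, a ground point imaged 500 pixels below the principal point (with fy = 500) lies 1 m ahead; an uncompensated pitch shake of a few degrees visibly biases that estimate.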

  15. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    Science.gov (United States)

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.

  16. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and the side slip angle, which are the two most important parameters describing the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are then used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparison against state-of-the-art monocular visual odometry methods, using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.
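    The hypothesize-and-verify structure described above (a closed-form minimal solver acting as the hypothesis generator inside RANSAC, followed by refinement over all inliers) can be sketched generically; here a two-point line fit stands in for the linearized motion model, and all names are illustrative.

```python
import random

def ransac(points, fit_minimal, residual, n_min, iters=200, thresh=0.1, seed=0):
    """Generic hypothesize-and-verify loop: draw minimal samples, build a
    closed-form hypothesis, score it by inlier count, keep the best."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit_minimal(rng.sample(points, n_min))
        inliers = [p for p in points if residual(model, p) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Stand-in minimal solver: a line y = a*x + b from two points.
def line_from_two(sample):
    (x1, y1), (x2, y2) = sample
    if x1 == x2:
        return None                       # degenerate sample, no hypothesis
    a = (y2 - y1) / (x2 - x1)
    return (a, y1 - a * x1)

def line_residual(model, p):
    if model is None:
        return float("inf")
    a, b = model
    return abs(p[1] - (a * p[0] + b))
```

    The winning inlier set would then feed a non-linear refinement step, which in the paper is the reprojection-error minimization.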

  17. Cortical dynamics of three-dimensional form, color, and brightness perception. 1. Monocular theory

    Energy Technology Data Exchange (ETDEWEB)

    Grossberg, S.

    1987-01-01

    A real-time visual-processing theory is developed to explain how three-dimensional form, color, and brightness percepts are coherently synthesized. The theory describes how several fundamental uncertainty principles that limit the computation of visual information at individual processing stages are resolved through parallel and hierarchical interactions among several processing stages. The theory provides unified analysis and many predictions of data about stereopsis, binocular rivalry, hyperacuity, McCollough effect, textural grouping, border distinctness, surface perception, monocular and binocular brightness percepts, filling-in, metacontrast, transparency, figural aftereffects, lateral inhibition within spatial frequency channels, proximity luminance covariance, tissue contrast, motion segmentation, and illusory figures, as well as about reciprocal interactions among the hypercolumns, blobs, and stripes of cortical areas V1, V2, and V4. Monocular and binocular interactions between a Boundary Contour (BC) System and a Feature Contour (FC) System are developed. The BC System, defined by a hierarchy of oriented interactions, synthesizes an emergent and coherent binocular boundary segmentation from combinations of unoriented and oriented scenic elements.

  18. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. A necessary and sufficient condition for the method to have a unique solution is provided. An extended application of the method is not only to reconstruct the 3D trajectory, but also to capture the orientation of the moving object, which cannot be obtained by PnP-based methods due to a lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for a definite solution to exist is derived from equivalence relations among the orders of the object's moving-trajectory equations, which specifies the applicable conditions of the method. Simulation and experimental results show that the method not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.

  19. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling.

    Science.gov (United States)

    Haouchine, Nazim; Dequidt, Jeremie; Berger, Marie-Odile; Cotin, Stephane

    2015-12-01

    This paper focuses on 3D shape recovery and augmented reality on elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an underconstrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This formulation avoids restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques in experiments conducted on computer-generated and real data that show its effectiveness in recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.

  20. Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy

    Directory of Open Access Journals (Sweden)

    Cao Yuan

    2015-01-01

    Full Text Available This paper proposes a new real-time target tracking method based on open-loop monocular vision motion control. It uses the particle filter technique to predict the moving target's position in an image. Owing to the properties of the particle filter, the method can effectively capture both linear and nonlinear motion behavior. In addition, the method uses simple mathematical operations to map the target's image coordinates to real-world coordinates, so it requires few computing resources. Moreover, the method adopts a monocular vision approach, i.e., a single camera, achieving its objective with little hardware. First, the method estimates the position and size of the target in the image at the next time step. Then, the real position of the target corresponding to the obtained information is predicted. Finally, the mobile robot is controlled so as to keep the target in the center of the camera's view. The paper conducts tracking tests on L-shaped and S-shaped trajectories and compares the results with the Kalman filtering method. The experimental results show that the method achieves a good tracking effect and outperforms the Kalman filter on both trajectory types.
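    A bootstrap particle filter of the kind used for position prediction can be sketched in one dimension; the motion and measurement noise values below are illustrative, not the paper's.

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=1.0, meas_std=2.0, rng=random):
    """One predict-weight-resample cycle of a bootstrap particle filter
    tracking a 1D target position."""
    # Predict: diffuse each particle according to the motion model.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((p - measurement) / meas_std) ** 2) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return rng.choices(particles, weights=weights, k=len(particles))
```

    The state estimate at each step is simply the mean of the resampled particles.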

  1. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2). METHODS: A cross-sectional comparative study was performed using a structured questionnaire derived from a previous exploratory study before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by 40.6% of patients in Group 1 and 22.7% in Group 2 (p<0.001). The most important causes of fear were the possibility of blindness, ocular complications, and death during surgery. The most prevalent feelings in both groups were doubts about good results and nervousness. CONCLUSION: Patients with monocular vision reported more fear and doubts related to surgical outcomes. Thus, it is necessary that physicians consider such emotional reactions and invest more time than usual explaining the risks and benefits of cataract surgery.

  2. Perception of 3D spatial relations for 3D displays

    Science.gov (United States)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat-panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids) and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching the object in a random direction by 40%, which eliminates its symmetry. The subject's task is to decide whether or not the presented object is distorted, under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', a conventional dependent variable in signal-detection experiments.
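    The discriminability index d' mentioned above is computed from hit and false-alarm rates as d' = z(H) - z(F), with z the inverse of the standard normal CDF; a minimal sketch:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Discriminability index from signal detection theory:
    d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

    For example, a hit rate of 0.84 with a false-alarm rate of 0.16 corresponds to d' of about 2, while equal hit and false-alarm rates give d' = 0 (chance performance).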

  3. Handbook of display technology

    CERN Document Server

    Castellano, Joseph A

    1992-01-01

    This book presents a comprehensive review of technical and commercial aspects of display technology. It provides design engineers with the information needed to select proper technology for new products. The book focuses on flat, thin displays such as light-emitting diodes, plasma display panels, and liquid crystal displays, but it also includes material on cathode ray tubes. Displays include a large number of products from televisions, auto dashboards, radios, and household appliances, to gasoline pumps, heart monitors, microwave ovens, and more.For more information on display tech

  4. Intermittent exotropia: comparative surgical results of lateral recti-recession and monocular recess-resect Exotropia intermitente: comparação dos resultados cirúrgicos entre retrocesso dos retos laterais e retrocesso-ressecção monocular

    Directory of Open Access Journals (Sweden)

    Vanessa Macedo Batista Fiorelli

    2007-06-01

    Full Text Available PURPOSE: To compare the results of recession of the lateral recti and the monocular recess-resect procedure for correction of the basic type of intermittent exotropia. METHODS: 115 patients with intermittent exotropia underwent surgery. The patients were divided into 4 groups according to the magnitude of the preoperative deviation, and the surgical procedure was performed accordingly. Well-compensated orthophoria, exophoria, or esophoria was considered surgical success, with a minimum follow-up of 1 year after the operation. RESULTS: Success was obtained in 69% of the patients submitted to recession of the lateral recti and in 77% submitted to monocular recess-resect. In the groups with deviations between 12 PD and 25 PD, surgical success was observed in 74% of the patients submitted to recession of the lateral recti and in 78% of the patients submitted to monocular recess-resect (p=0.564). In the group with deviations between 26 PD and 35 PD, surgical success was observed in 65% of the patients submitted to recession of the lateral recti and in 75% of the patients submitted to monocular recess-resect (p=0.266). CONCLUSION: Recession of the lateral recti and monocular recess-resect were equally effective in correcting basic-type intermittent exotropia according to its preoperative deviation in primary position. PURPOSE (translated from the Portuguese abstract): To compare the results of recession of the lateral recti and monocular recess-resect for correction of basic-type intermittent exotropia. METHODS: The records of 115 patients with basic-type intermittent exotropia who underwent surgery between January 1991 and December 2001 were selected. Surgical planning followed the guidance of the Extrinsic Ocular Motility section of the Ophthalmology Clinic of Santa Casa de São Paulo and was based on the magnitude of the deviation in the primary position of gaze. The patients were divided into 4 groups according to the magnitude

  5. The perceived visual direction of monocular objects in random-dot stereograms is influenced by perceived depth and allelotropia.

    Science.gov (United States)

    Hariharan-Vilupuru, Srividhya; Bedell, Harold E

    2009-01-01

    The proposed influence of objects that are visible to both eyes on the perceived direction of an object that is seen by only one eye is known as the "capture of binocular visual direction". The purpose of this study was to evaluate whether stereoscopic depth perception is necessary for the "capture of binocular visual direction" to occur. In one pair of experiments, perceived alignment between two nearby monocular lines changed systematically with the magnitude and direction of horizontal but not vertical disparity. In four of the five observers, the effect of horizontal disparity on perceived alignment depended on which eye viewed the monocular lines. In additional experiments, the perceived alignment between the monocular lines changed systematically with the magnitude and direction of both horizontal and vertical disparities when the monocular line separation was increased from 1.1° to 3.3°. These results indicate that binocular capture depends on the perceived depth that results from horizontal retinal image disparity as well as on allelotropia, the averaging of local-sign information. Our data suggest that, during averaging, different weights are afforded to the local-sign information in the two eyes, depending on whether the separation between binocularly viewed targets is horizontal or vertical.

  6. Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues

    Science.gov (United States)

    Lebreton, Pierre; Raake, Alexander; Barkowsky, Marcus; Le Callet, Patrick

    2014-03-01

    The perception of depth in images and video sequences is based on different depth cues. Studies have considered the depth-perception threshold as a function of viewing distance (Cutting and Vishton, 1995), as well as the combination of different monocular depth cues, their quantitative relation with binocular depth cues, and their different possible types of interaction (Landy, 1995). But these studies consider only artificial stimuli, and none of them attempts to quantify the contribution of monocular and binocular depth cues relative to each other in the specific context of natural images. This study targets that application case: the evaluation of the strength of different depth cues relative to each other, using a carefully designed image database that covers as many combinations as possible of monocular (linear perspective, texture gradient, relative size, and defocus blur) and binocular depth cues. The 200 images were evaluated in two distinct subjective experiments to assess separately perceived depth and the different monocular depth cues. The methodology and the definition of the different rating scales are detailed. The image database (DC3Dimg) is also released to the scientific community.

  7. Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2012-01-01

    Full Text Available This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments, using only on-board sensors to simultaneously build a map of the surroundings; this map is needed for the robot to track its position. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) arguably represents the hardest variant of SLAM. In monocular SLAM, a single camera, which is freely moving through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. In this work, detailed formulation, extended discussion, and experiments with real data are presented in order to validate the proposal and to show its performance.
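    Inverse-depth parameterizations of this family store a feature as an anchor camera position, a viewing direction, and an inverse depth rho = 1/d, converting to a Euclidean point only when needed. The sketch below follows the common convention from the monocular-SLAM literature; the exact angle convention is an assumption, not necessarily the paper's.

```python
import math

def inverse_depth_to_point(anchor, azimuth, elevation, rho):
    """Convert an inverse-depth feature (anchor camera position, viewing
    direction angles, inverse depth rho = 1/d) to a Euclidean 3D point."""
    # Unit ray from the anchor camera in world coordinates.
    m = (math.cos(elevation) * math.cos(azimuth),
         math.cos(elevation) * math.sin(azimuth),
         math.sin(elevation))
    d = 1.0 / rho
    return tuple(a + d * mi for a, mi in zip(anchor, m))
```

    The appeal of this representation is that distant features (rho near zero) stay well-behaved in the filter, whereas a direct depth parameterization would blow up; the "delayed" variant waits until parallax makes rho observable before adding the feature.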

  8. The Effect of Long Term Monocular Occlusion on Vernier Threshold: Elasticity in the Young Adult Visual System.

    Science.gov (United States)

    1986-06-01

    …experiment, Brown and Salinger (1975) found a decrease of the X-cell population in the lateral geniculate body of the adult cat. These investigators… (Brown, D.L., and Salinger, W.L., "Loss of X-Cells in Lateral Geniculate Nucleus with Monocular Paralysis: Neural Plasticity in the Adult Cat", Science, 189).

  9. [EXPERIMENTAL TESTING OF THE OPERATOR'S PERCEPTION OF SYMBOLIC INFORMATION ON THE HELMET-MOUNTED DISPLAY DEPENDING ON THE STRUCTURAL COMPLEXITY OF VISUAL ENVIRONMENT].

    Science.gov (United States)

    Lapa, V V; Ivanov, A I; Davydov, V V; Ryabinin, V A; Golosov, S Yu

    2015-01-01

    The experiments showed that the pilot's perception of symbolic information on the helmet-mounted display (HMD) depends on the type of HMD (mono- or binocular) and on the structural complexity of the background image. A complex background extends perception time and increases perception errors, particularly when a monocular HMD is used. In extremely complicated visual situations (symbolic information on a background intricately structured by superimposition of a TV image on the real visual environment), perception time increases significantly and the precision of symbol perception decreases regardless of HMD type.

  10. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  11. Lunar Sample Display Locations

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA provides a number of lunar samples for display at museums, planetariums, and scientific expositions around the world. Lunar displays are open to the public....

  12. Brief monocular deprivation as an assay of short-term visual sensory plasticity in schizophrenia – the binocular effect.

    Directory of Open Access Journals (Sweden)

    John J Foxe

    2013-12-01

    Full Text Available Background: Visual sensory processing deficits are consistently observed in schizophrenia, with clear amplitude reduction of the visual evoked potential (VEP) during the initial 50-150 milliseconds of processing. Similar deficits are seen in unaffected first-degree relatives and drug-naïve first-episode patients, pointing to these deficits as potential endophenotypic markers. Schizophrenia is also associated with deficits in neural plasticity, implicating dysfunction of both glutamatergic and GABAergic systems. Here, we sought to understand the intersection of these two domains, asking whether short-term plasticity during early visual processing is specifically affected in schizophrenia. Methods: Brief periods of monocular deprivation induce relatively rapid changes in the amplitude of the early VEP, i.e., short-term plasticity. Twenty patients and twenty non-psychiatric controls participated. VEPs were recorded during binocular viewing and compared to the sum of VEP responses during brief monocular viewing periods (i.e., left-eye + right-eye viewing). Results: Under monocular conditions, neurotypical controls exhibited an effect that patients failed to demonstrate: the amplitude of the summed monocular VEPs was robustly greater than the amplitude elicited binocularly during the initial sensory processing period. In patients, this binocular effect was absent. Limitations: Patients were all medicated. Ideally, this study would also include first-episode unmedicated patients. Conclusions: These results suggest that short-term compensatory mechanisms that allow healthy individuals to generate robust VEPs in the context of monocular deprivation are not effectively activated in patients with schizophrenia. This simple assay may provide a useful biomarker of short-term plasticity in the psychotic disorders and a target endophenotype for therapeutic interventions.

  13. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems, we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using a reliability judgment of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
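    For a single yaw angle, this fusion idea reduces to a scalar Kalman filter: the gyro drives the prediction and accumulates drift uncertainty, while camera-derived rotations, when available, correct it. The class and noise values below are an illustrative sketch, not the authors' formulation.

```python
class YawFilter:
    """Scalar Kalman filter fusing an integrated gyro rate (prediction)
    with absolute camera yaw measurements (correction)."""

    def __init__(self, yaw=0.0, var=1.0, gyro_var=0.01, cam_var=0.05):
        self.yaw, self.var = yaw, var
        self.gyro_var, self.cam_var = gyro_var, cam_var

    def predict(self, gyro_rate, dt):
        # Integrate the rate; uncertainty grows with gyro noise (drift).
        self.yaw += gyro_rate * dt
        self.var += self.gyro_var * dt

    def update(self, cam_yaw):
        # Fold in the camera's absolute yaw when features are available.
        k = self.var / (self.var + self.cam_var)   # Kalman gain
        self.yaw += k * (cam_yaw - self.yaw)
        self.var *= (1.0 - k)
```

    When the camera rotation is judged unreliable, `update` is simply skipped and the variance keeps growing, mirroring the reliability judgment described above.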

  14. Extracting hand articulations from monocular depth images using curvature scale space descriptors

    Institute of Scientific and Technical Information of China (English)

    Shao-fan WANG; Chun LI; De-hui KONG; Bao-cai YIN

    2016-01-01

    We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.

  16. Mobile Robot Simultaneous Localization and Mapping Based on a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Songmin Jia

    2016-01-01

    Full Text Available This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping) algorithm for mobile robots. In the proposed method, the tracking and mapping procedures are split into two separate tasks performed in parallel threads. In the tracking thread, a ground-feature-based pose estimation method is employed to initialize the algorithm for the constrained motion of the mobile robot, and an initial map is built by triangulating the matched features for the further tracking procedure. In the mapping thread, an epipolar searching procedure is utilized to find matching features, and a homography-based outlier rejection method is adopted to reject mismatched features. The indoor experimental results demonstrate that the proposed algorithm performs well in map building and verify its feasibility and effectiveness.
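    Homography-based outlier rejection of the kind described can be sketched by gating matches on their transfer error under an estimated homography; the threshold and the plain-list representation are illustrative.

```python
def apply_homography(H, p):
    """Apply a 3x3 homography (row-major nested lists) to a 2D point."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def reject_outliers(H, matches, thresh=2.0):
    """Keep only matches whose transfer error under H is below thresh
    (pixels); mismatched features transfer far from their partner."""
    def err(a, b):
        xa, ya = apply_homography(H, a)
        return ((xa - b[0]) ** 2 + (ya - b[1]) ** 2) ** 0.5
    return [(a, b) for a, b in matches if err(a, b) < thresh]
```

    The gate works here because ground features in two views of a planar floor are related by a homography, so a correct match must transfer close to its partner while a mismatch generally does not.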

  17. Navigation system for a small size lunar exploration rover with a monocular omnidirectional camera

    Science.gov (United States)

    Laîné, Mickaël; Cruciani, Silvia; Palazzolo, Emanuele; Britton, Nathan J.; Cavarelli, Xavier; Yoshida, Kazuya

    2016-07-01

    A lunar rover requires an accurate localisation system in order to operate in an uninhabited environment. However, every additional piece of equipment mounted on it drastically increases the overall cost of the mission. This paper reports a possible solution for a micro-rover using a single monocular omnidirectional camera. Our approach relies on a combination of feature tracking and template matching for visual odometry. The results are then refined using a graph-based SLAM algorithm, which also provides a sparse reconstruction of the terrain. We tested the algorithm on a lunar rover prototype in a lunar-analogue environment, and the experiments show that the estimated trajectory is accurate and that the combination with the template matching algorithm improves the otherwise poor detection of spot turns.

  18. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.

  19. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera, intended to complement ultrasonic sensors under low-cost conditions. We aim to detect distant obstacles that move toward our autonomous navigation car, so that the system can raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Each obstacle is then separated from the others in an independent region and assigned a confidence level indicating whether it is approaching. Results on an open dataset and on our own autonomous navigation car show that the method detects distant moving obstacles effectively in real time.
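
The frame-differencing step described here can be sketched in a few lines of NumPy. This is a minimal illustration that models ego-motion compensation as a pure integer image-plane translation; a real system would estimate and apply a full homography warp instead.

```python
import numpy as np

def moving_obstacle_mask(prev, curr, ego_shift, thresh=25):
    """Frame differencing after ego-motion compensation (sketch).
    `ego_shift` = (dy, dx) is the estimated global image shift caused by
    camera motion; pixels whose compensated difference exceeds `thresh`
    are flagged as candidate moving obstacles."""
    compensated = np.roll(prev, ego_shift, axis=(0, 1))
    diff = np.abs(curr.astype(np.int16) - compensated.astype(np.int16))
    return diff > thresh

# Toy example: one background point shifts by 1 px due to ego-motion,
# while a small block appears only in the current frame.
prev = np.zeros((10, 10), np.uint8)
prev[5, 2] = 100                 # static background point
curr = np.zeros((10, 10), np.uint8)
curr[5, 3] = 100                 # same point, shifted by ego-motion
curr[2:4, 6:8] = 200             # independently moving object
mask = moving_obstacle_mask(prev, curr, (0, 1))
print(mask.sum())                # 4 -- only the object's pixels are flagged
```

The background point cancels out after compensation, so only the independently moving block survives the threshold.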

  20. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important image features: the grey values of its inner part differ from those of the external umbrella ribs, and the shape of its inner dark part is nearly circular. Based on this prior knowledge, coarse and fine positioning algorithms are designed to detect the drogue, and a particle filter based on the drogue's shape is proposed to track it. A strategy for switching between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. Experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.
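
The grey-value cue used by the tracker (the drogue's inner part is darker than the ribs) lends itself to a very compact particle filter. The sketch below is a generic illustration of that idea, not the paper's algorithm: each particle is a candidate centre, predicted with Gaussian noise and weighted by how dark the image is in a small window around it. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def track_drogue(frames, init_xy, n_particles=300, motion_std=3.0, r=4):
    """Minimal particle filter for a dark circular blob (sketch).
    Each particle is a candidate (x, y) centre; its weight is the mean
    darkness of a (2r x 2r) window around it, exploiting the prior that
    the drogue's inner part is darker than the umbrella ribs."""
    particles = np.tile(np.asarray(init_xy, float), (n_particles, 1))
    track = []
    for img in frames:
        particles += rng.normal(0.0, motion_std, particles.shape)  # predict
        h, w = img.shape
        xs = np.clip(particles[:, 0].astype(int), r, w - r - 1)
        ys = np.clip(particles[:, 1].astype(int), r, h - r - 1)
        darkness = np.array([255.0 - img[y - r:y + r, x - r:x + r].mean()
                             for x, y in zip(xs, ys)])
        wgt = darkness + 1e-9                    # avoid all-zero weights
        wgt /= wgt.sum()
        track.append((particles * wgt[:, None]).sum(axis=0))  # estimate
        idx = rng.choice(n_particles, n_particles, p=wgt)     # resample
        particles = particles[idx]
    return track
```

Feeding this a sequence of frames containing a slowly moving dark blob yields a track that stays close to the blob's centre; the detection/tracking switch described in the abstract would re-seed the particles whenever the weights collapse.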

  1. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

Full Text Available We propose a range estimation method for vision-based forward collision warning systems that use a monocular camera. To handle variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The method remains robust even when the road inclination varies continuously on hilly roads or when lane markings are not visible on crowded roads. For evaluation, a vision-based forward collision warning system was implemented and the proposed method was tested on video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method is robust in both highway and urban traffic environments.
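
Once the virtual horizon is known, monocular range follows from flat-ground pinhole geometry. The function below is an illustrative sketch of that standard relation, not necessarily the paper's exact formulation: a vehicle whose bottom edge projects at image row y_bottom lies at range Z = f * H / (y_bottom - y_horizon), where H is the camera height above the road and the y axis grows downwards.

```python
def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
    """Range to a vehicle from the flat-ground pinhole model (sketch).
    y_bottom_px: image row of the vehicle's bottom edge (contact with road)
    y_horizon_px: image row of the (virtual) horizon
    focal_px: focal length in pixels; cam_height_m: camera height in metres."""
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        raise ValueError("bottom edge must lie below the horizon")
    return focal_px * cam_height_m / dy

# 1000 px focal length, camera 1.2 m above the road, vehicle 60 px below
# the estimated virtual horizon:
print(range_from_horizon(y_bottom_px=540, y_horizon_px=480,
                         focal_px=1000, cam_height_m=1.2))  # 20.0
```

This makes clear why estimating the virtual horizon at run time matters: a pitch error that shifts the horizon by a few pixels changes dy, and hence the estimated range, substantially at long distances.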

  2. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    Science.gov (United States)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

This paper develops indoor mobile robot navigation by center following based on monocular vision. In our method, two boundary lines between the wall and the baseboard are detected in the frontal image, and appearance-based obstacle detection is then applied. When an obstacle is present, an avoidance or stopping movement is executed according to its size and position; when no obstacle is present, the robot moves along the center of the corridor. We developed a wheelchair-based mobile robot and evaluated the accuracy of the boundary line detection, obtaining fast processing and high detection accuracy. We demonstrate the effectiveness of our mobile robot through stopping experiments with various obstacles and through moving experiments.

  3. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    Science.gov (United States)

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first uses the Discriminative Shape Regression method to locate facial feature points in the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.

  4. Short-term monocular deprivation strengthens the patched eye's contribution to binocular combination.

    Science.gov (United States)

    Zhou, Jiawei; Clavagnier, Simon; Hess, Robert F

    2013-04-18

    Binocularity is a fundamental property of primate vision. Ocular dominance describes the perceptual weight given to the inputs from the two eyes in their binocular combination. There is a distribution of sensory dominance within the normal binocular population with most subjects having balanced inputs while some are dominated by the left eye and some by the right eye. Using short-term monocular deprivation, the sensory dominance can be modulated as, under these conditions, the patched eye's contribution is strengthened. We address two questions: Is this strengthening a general effect such that it is seen for different types of sensory processing? And is the strengthening specific to pattern deprivation, or does it also occur for light deprivation? Our results show that the strengthening effect is a general finding involving a number of sensory functions, and it occurs as a result of both pattern and light deprivation.

  5. Relationship between monocularly deprivation and amblyopia rats and visual system development

    Institute of Scientific and Technical Information of China (English)

    Yu Ma

    2014-01-01

Objective: To explore the changes in the lateral geniculate body and visual cortex in monocular strabismic and form-deprived amblyopic rats, the plastic stage of visual development, and visual plasticity in adult rats. Methods: A total of 60 SD rats aged 13 d were randomly divided into three groups, A, B and C, with 20 rats in each group. Group A was the normal control group and received no intervention; group B was the strabismic amblyopia group, in which unilateral extraocular rectus resection was used to establish the strabismic amblyopia model; group C was the monocular form-deprivation amblyopia group, established by unilateral eyelid-margin resection plus lid suture. At the early (P25), middle (P35) and late (P45) phases of visual development and in adulthood (P120), the lateral geniculate body and visual cortex area 17 of five rats in each group were extracted for c-Fos immunocytochemistry. Morphological changes of neurons in the lateral geniculate body and visual cortex were observed, differences in light-induced c-Fos-positive neurons were measured in each group, and the radiation development of the P120 amblyopic adult rats was observed. Results: In groups B and C, c-Fos-positive cells were significantly fewer than in the control group at P25 (P<0.05), and the level of c-Fos-positive cells in group B was significantly lower than that in group A (P<0.05). The binocular c-Fos-positive cell levels of groups B and C were significantly higher than those of the control group at P35, P45 and P120 (P<0.05). Conclusions: The increase of c-Fos expression in lateral geniculate body and visual cortex neurons of adult amblyopic rats suggests that the visual cortex neurons retain a certain degree of visual plasticity.

  6. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    Directory of Open Access Journals (Sweden)

    Antoni Grau

    2013-07-01

Full Text Available Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which moves freely through its environment, represents the sole sensor input to the system. The sensors used have a large impact on the SLAM algorithm. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step, and special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filter-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  7. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    Science.gov (United States)

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which moves freely through its environment, represents the sole sensor input to the system. The sensors used have a large impact on the SLAM algorithm. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step, and special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filter-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
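
A well-known way to initialize features from purely angular measurements in filter-based monocular SLAM is the inverse-depth parameterization; the paper's own two-step scheme differs in its details, but the core conversion from an angular feature to a Euclidean map point can be sketched as follows (a generic illustration, not the authors' exact method).

```python
import numpy as np

def inverse_depth_to_point(cam_pos, azimuth, elevation, rho):
    """Convert an inverse-depth feature to a Euclidean 3D point (sketch).
    cam_pos: camera position when the feature was first observed
    azimuth, elevation: bearing angles of the observed ray (radians)
    rho: inverse depth, rho = 1/d. The point is cam_pos + m / rho, where
    m is the unit bearing vector. This is the standard inverse-depth
    parameterization often used in filter-based monocular SLAM."""
    m = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])          # unit bearing vector
    return np.asarray(cam_pos, float) + m / rho

p = inverse_depth_to_point(cam_pos=(0, 0, 0), azimuth=0.0,
                           elevation=0.0, rho=0.2)   # depth 5 m
print(p)   # [5. 0. 0.]
```

The appeal of this representation is that a feature can enter the filter immediately after a single observation, with depth uncertainty expressed as a (near-Gaussian) uncertainty on rho.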

  8. c-FOS expression in the visual system of tree shrews after monocular inactivation.

    Science.gov (United States)

    Takahata, Toru; Kaas, Jon H

    2017-01-01

Tree shrews possess an unusual segregation of ocular inputs to sublayers, rather than columns, in the primary visual cortex (V1). In this study, the lateral geniculate nucleus (LGN), superior colliculus (SC), pulvinar, and V1 were examined for changes in expression of c-FOS, an immediate-early gene, after 1 or 24 hours of monocular inactivation with tetrodotoxin (TTX) in tree shrews. Monocular inactivation greatly reduced gene expression in LGN layers related to the blocked eye, whereas normally high to moderate levels were maintained in the layers receiving inputs from the intact eye. The SC and caudal pulvinar contralateral to the blocked eye showed greatly (SC) or moderately (pulvinar) reduced gene expression, reflecting their dependence on the contralateral eye. c-FOS expression in V1 was greatly reduced contralateral to the blocked eye, with most of the remaining expression in upper layer 4a, lower 4b, and lower layer 6. In contrast, much of V1 contralateral to the active eye showed normal levels of c-FOS expression, including the inner parts of sublayers 4a and 4b and layers 2, 3, and 6. In some cases, upper layer 4a and lower 4b showed a reduction of gene expression. Layer 5 and sublayer 3c had normally low levels of gene expression. The results reveal the functional dominance of the contralateral eye in activating the SC, pulvinar, and V1, and the V1 results suggest that the sublaminar organization of layer 4 is more complex than previously realized. J. Comp. Neurol. 525:151-165, 2017. © 2016 Wiley Periodicals, Inc.

  9. Invisible Display in Aluminum

    DEFF Research Database (Denmark)

    Prichystal, Jan Phuklin; Hansen, Hans Nørgaard; Bladt, Henrik Henriksen

    2005-01-01

Bang & Olufsen a/s has been working with ideas for invisible integration of displays in metal surfaces. Invisible integration of information displays has traditionally been achieved by placing displays behind transparent or semitransparent materials such as plastic or glass. The wish for an integrated display in a metal surface is often ruled by the design and functionality of a product. Integrating displays in metal surfaces requires metal removal in order to clear the area of the display to some extent. The idea behind an invisible display in aluminium concerns processing the metal so that an image can be obtained by shining light from the backside of the workpiece. When there is no light from the backside, the front surface seems totally untouched. This was achieved by laser ablation with ultra-short pulses.

  10. Polyplanar optic display

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.; Biscardi, C.; Brewster, C.; DeSanto, L. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology; Beiser, L. [Leo Beiser Inc., Flushing, NY (United States)

    1997-07-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP{trademark} chip, the opto-mechanical design and viewing angle characteristics.

  11. OLED displays and lighting

    CERN Document Server

    Koden, Mitsuhiro

    2017-01-01

    Organic light-emitting diodes (OLEDs) have emerged as the leading technology for the new display and lighting market. OLEDs are solid-state devices composed of thin films of organic molecules that create light with the application of electricity. OLEDs can provide brighter, crisper displays on electronic devices and use less power than conventional light-emitting diodes (LEDs) or liquid crystal displays (LCDs) used today. This book covers both the fundamentals and practical applications of flat and flexible OLEDs.

  12. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  13. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo, and it uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It supports anaglyph and special stereo hardware through the same API (application-program interface) and can simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that simply accomplishes the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, a 3D cursor, or overlays, all of which can be built using this toolkit.

  14. Eliminating accommodation-convergence conflicts in stereoscopic displays: Can multiple-focal-plane displays elicit continuous and consistent vergence and accommodation responses?

    Science.gov (United States)

    MacKenzie, Kevin J.; Watt, Simon J.

    2010-02-01

Conventional stereoscopic displays present images at a fixed focal distance. Depth variations in the depicted scene therefore result in conflicts between the stimuli to vergence and to accommodation. The resulting decoupling of accommodation and vergence responses can have adverse consequences, including reduced stereo performance, difficulty fusing binocular images, and fatigue and discomfort. These problems could be eliminated if stereo displays could present correct focus cues. A promising approach is to present each eye with a sum of images presented at multiple focal planes, and to approximate continuous variations in focal distance by distributing light energy across image planes, a technique referred to as depth-filtering [1]. Here we describe a novel multi-plane display in which we can measure accommodation and vergence responses. We report an experiment comparing these oculomotor responses to real stimuli and to depth-filtered simulations of the same distance. Vergence responses were generally similar across conditions. Accommodation responses to depth-filtered images were inaccurate, however, showing an overshoot of the target, particularly in response to a small step change in stimulus distance. This is surprising because we have previously shown that blur-driven accommodation to the same stimuli, viewed monocularly, is accurate and reliable. We speculate that an initial convergence-driven accommodation response, combined with a weaker accommodative stimulus from depth-filtered images, leads to this overshoot. Our results suggest that stereoscopic multi-plane displays can be effective, but require smaller image-plane separations than monocular accommodation responses suggest.
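
The depth-filtering idea is simple to state computationally: to simulate a focal distance between two adjacent image planes, light energy is split between them in proportion to the target's dioptric distance from each plane. The sketch below assumes the common linear-in-dioptres weighting; the display described in the abstract may use a different weighting rule.

```python
def depth_filter_weights(target_dpt, near_dpt, far_dpt):
    """Depth-filtering weights (sketch): approximate a focal distance
    between two image planes by splitting light energy linearly in
    dioptres between them. All distances are in dioptres (1/m), with
    near_dpt > far_dpt. Returns (w_near, w_far), summing to 1."""
    if not far_dpt <= target_dpt <= near_dpt:
        raise ValueError("target must lie between the two planes (in dioptres)")
    w_near = (target_dpt - far_dpt) / (near_dpt - far_dpt)
    return w_near, 1.0 - w_near

# Simulated distance of 0.8 D between planes at 1.2 D and 0.4 D:
print(depth_filter_weights(0.8, near_dpt=1.2, far_dpt=0.4))  # weights of about (0.5, 0.5)
```

A target exactly on a plane gets all the energy on that plane; the experiment's finding is that the resulting accommodative stimulus is weaker than that of a real target at the same simulated distance.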

  15. Monocular discs in the occlusion zones of binocular surfaces do not have quantitative depth--a comparison with Panum's limiting case.

    Science.gov (United States)

    Gillam, Barbara; Cook, Michael; Blackburn, Shane

    2003-01-01

    Da Vinci stereopsis is defined as apparent depth seen in a monocular object laterally adjacent to a binocular surface in a position consistent with its occlusion by the other eye. It is widely regarded as a new form of quantitative stereopsis because the depth seen is quantitatively related to the lateral separation of the monocular element and the binocular surface (Nakayama and Shimojo 1990 Vision Research 30 1811-1825). This can be predicted on the basis that the more separated the monocular element is from the surface the greater its minimum depth behind the surface would have to be to account for its monocular occlusion. Supporting evidence, however, has used narrow bars as the monocular elements, raising the possibility that quantitative depth as a function of separation could be attributable to Panum's limiting case (double fusion) rather than to a new form of stereopsis. We compared the depth performance of monocular objects fusible with the edge of the surface in the contralateral eye (lines) and non-fusible objects (disks) and found that, although the fusible objects showed highly quantitative depth, the disks did not, appearing behind the surface to the same degree at all separations from it. These findings indicate that, although there is a crude sense of depth for discrete monocular objects placed in a valid position for uniocular occlusion, depth is not quantitative. They also indicate that Panum's limiting case is not, as has sometimes been claimed, itself a case of da Vinci stereopsis since fusibility is a critical factor for seeing quantitative depth in discrete monocular objects relative to a binocular surface.

  16. Transposição monocular vertical dos músculos retos horizontais em pacientes esotrópicos portadores de anisotropia em A / Monocular vertical displacement of the horizontal rectus muscles in esotropic patients with "A" pattern

    Directory of Open Access Journals (Sweden)

    Ana Carolina Toledo Dias

    2004-10-01

Full Text Available PURPOSE: To report the effectiveness of the vertical monocular displacement of the horizontal rectus muscles, proposed by Goldstein, in esotropic patients with A pattern, without oblique muscle overaction. METHODS: A retrospective study was performed using the charts of 23 esotropic patients with A pattern > 10Δ submitted to vertical monocular displacement of the horizontal rectus muscles. The patients were divided into two groups according to the magnitude of the preoperative deviation: group 1 (11Δ to 20Δ) and group 2 (21Δ to 30Δ). Results were considered satisfactory when corrections to A < 10Δ or V < 15Δ were obtained. RESULTS: The average absolute correction was 16.5Δ in group 1 and 16.6Δ in group 2. In group 1, 91.6% of the patients presented satisfactory surgical results, and in group 2, 81.8% (p = 0.468). CONCLUSION: The surgical procedure proposed by Goldstein is effective, and there was no statistically significant relation between the magnitude of the preoperative anisotropia and the obtained correction.

  17. 单目视觉同步定位与地图创建方法综述 (A survey of monocular simultaneous localization and mapping)

    Institute of Scientific and Technical Information of China (English)

    顾照鹏; 刘宏

    2015-01-01

With the development of computer vision technology, monocular simultaneous localization and mapping (monocular SLAM) has gradually become one of the hot issues in the field of computer vision. This paper introduces a classification of monocular SLAM methods and reviews the current state of research from several aspects, including visual feature detection and matching, optimization of data association, depth acquisition of feature points, and map scale control. Monocular SLAM methods combined with other sensors are also reviewed, and significant issues needing further study are discussed.

  18. X'tal Visor : 頭部搭載型小型プロジェクタの設計と評価 (X'tal Visor: design and evaluation of a compact head-mounted projector; special issue on image processing technology for VR)

    National Research Council Canada - National Science Library

    山崎, 潤; 園田, 哲理; 吉田, 匠; 川上, 直樹; 舘, 〓

    2007-01-01

.... Several head-mounted displays have conventionally been developed as AR (Augmented Reality) displays. However, most of these displays cover the user's eyes and consequently produce a low sense of realism...

  19. Embedding perspective cue in holographic projection display by virtual variable-focal-length lenses

    Science.gov (United States)

    Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Zhao, Fuliang

    2014-10-01

To make a viewing perspective cue emerge in reconstructed images, a new approach is proposed that incorporates virtual variable-focal-length lenses into a computer-generated Fourier hologram (CGFH). The approach combines the monocular vision principle with digital hologram display, and therefore inherits properties of both display models. It can thus overcome the unsatisfactory visual depth perception of reconstructed three-dimensional (3D) images in holographic projection display (HPD). First, an analysis of conventional CGFH reconstruction shows that a finite depth-of-focus and a non-adjustable lateral magnification are the reasons for the lack of depth information on a fixed image plane. Second, the principle of controlling lateral magnification in wave-front reconstruction by virtual lenses is demonstrated, and a relation model is derived linking the depth of the object, the parameters of the virtual lenses, and the lateral magnification. Next, the focal lengths of the virtual lenses are determined by considering the perspective distortion of human vision. After the virtual lenses are employed in the CGFH, the reconstructed image on the focal plane delivers the same depth cues as a monocular stereoscopic image. Finally, the depth-of-focus enhancement produced by a virtual lens and its effect on reconstruction quality are described. Numerical simulation and electro-optical reconstruction experiments prove that the proposed algorithm improves the depth perception of the reconstructed 3D image in HPD. The proposed method offers a way of uniting multiple display models to enhance 3D display performance and the viewer experience.
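
The link between object depth, lens focal length and lateral magnification that the paper's relation model builds on can be illustrated with generic thin-lens optics (this is a textbook sketch, not the paper's derived model): for object distance d_o and focal length f, the lateral magnification is M = f / (f - d_o).

```python
def lateral_magnification(focal_len, obj_dist):
    """Thin-lens lateral magnification M = f / (f - d_o), from the
    Gaussian lens equation 1/d_o + 1/d_i = 1/f and M = -d_i/d_o.
    Generic optics sketch illustrating how a virtual variable-focal-length
    lens can scale the image of each depth layer; distances in metres."""
    return focal_len / (focal_len - obj_dist)

# Object at twice the focal length gives a unit-size, inverted image:
print(lateral_magnification(0.1, 0.2))   # -1.0
```

Because M depends on d_o, assigning a different virtual focal length to each depth layer lets the hologram impose the distance-dependent scaling that monocular perspective requires.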

  20. Standardizing visual display quality

    NARCIS (Netherlands)

    Besuijen, Ko; Spenkelink, Gerd P.J.

    1998-01-01

The current ISO 9241-3 standard for visual display quality and the proposed user performance tests are reviewed. The standard is found to be more engineering-oriented than ergonomic, and problems with system configuration, software applications, display settings, user behaviour, wear and the physical environment are identified.

  1. Polyplanar optical display electronics

    Energy Technology Data Exchange (ETDEWEB)

    DeSanto, L.; Biscardi, C. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology

    1997-07-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD{trademark}) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD{trademark} chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD{trademark} divorced from the light engine and the interfacing of the DMD{trademark} board with various video formats (CVBS, Y/C or S-video and RGB) including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.

  2. Visual merchandising window display

    Directory of Open Access Journals (Sweden)

    Opris (Cas. Stanila M.

    2013-12-01

Full Text Available Window display plays a major part in selling strategies; it no longer involves only the simple display of goods: nowadays it is a form of art that also serves to sustain the brand image. This article reveals the tools that are essential in creating a fabulous window display. Being a window designer is not an easy job: you always have to think ahead of trends, have a sense of colour, and know how to use light to attract customers into the store after only one glance at the window. The big store window displays are theatre scenes, with expensive backgrounds, special effects and high-fashion mannequins. The final role of the display is to convince customers to enter the store and trigger the purchasing act, which is the final goal of the retail activity.

  3. Defense display market assessment

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1998-09-01

    This paper addresses the number, function and size of principal military displays and establishes a basis to determine the opportunities for technology insertion in the immediate future and into the next millennium. Principal military displays are defined as those occupying appreciable crewstation real-estate and/or those without which the platform could not carry out its intended mission. DoD 'office' applications are excluded from this study. The military displays market is specified by such parameters as active area and footprint size, and other characteristics such as luminance, gray scale, resolution, angle, color, video capability, and night vision imaging system (NVIS) compatibility. Funded, future acquisitions, planned and predicted crewstation modification kits, and form-fit upgrades are taken into account. This paper provides an overview of the DoD niche market, allowing both government and industry a necessary reference by which to meet DoD requirements for military displays in a timely and cost-effective manner. The aggregate DoD market for direct-view and large-area military displays is presently estimated to be in excess of 242,000. Miniature displays are those which must be magnified to be viewed, involve a significantly different manufacturing paradigm and are used in helmet mounted displays and thermal weapon sight applications. Some 114,000 miniature displays are presently included within Service weapon system acquisition plans. For vendor production planning purposes it is noted that foreign military sales could substantially increase these quantities. The vanishing vendor syndrome (VVS) for older display technologies continues to be a growing, pervasive problem throughout DoD, which consequently must leverage the more modern display technologies being developed for civil- commercial markets.

  4. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision

    OpenAIRE

    Gillespie-Gallery, H.; Konstantakopoulou, E.; HARLOW, J.A.; Barbur, J. L.

    2013-01-01

    Purpose: It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. Methods: 95 participants aged 20 to 85 were recruited. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C opt...

  5. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli.

  6. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
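
The flow-parsing account reduces, at its core, to vector subtraction: the component of retinal motion predicted from self-motion is discounted, and any residual is attributed to object movement in the scene. A minimal sketch in Python (the point labels and motion values are illustrative, not the study's stimuli):

```python
# Flow parsing sketch: scene-relative object motion is recovered by
# subtracting the flow component predicted from self-motion from the
# total retinal motion of each point. Vectors are (dx, dy) in deg/s.
# All numbers are illustrative, not taken from the study.

def parse_flow(retinal, self_flow):
    """Scene-relative motion = total retinal motion - self-motion component."""
    out = {}
    for pt, (mx, my) in retinal.items():
        sx, sy = self_flow[pt]
        out[pt] = (mx - sx, my - sy)
    return out

# Observer translating rightward: stationary scene points all drift left
# on the retina; the probe additionally moves upward in the world.
self_flow = {"bg1": (-2.0, 0.0), "bg2": (-2.0, 0.0), "probe": (-2.0, 0.0)}
retinal   = {"bg1": (-2.0, 0.0), "bg2": (-2.0, 0.0), "probe": (-2.0, 1.5)}

scene_relative = parse_flow(retinal, self_flow)
# Background points parse as stationary; the probe's residual motion
# (0.0, 1.5) is attributed to object movement in the scene.
```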

  7. Panoramic projection avionics displays

    Science.gov (United States)

    Kalmanash, Michael H.

    2003-09-01

    Avionics projection displays are entering production in advanced tactical aircraft. Early adopters of this technology in the avionics community used projection displays to replace or upgrade earlier units incorporating direct-view CRT or AMLCD devices. Typical motivations for these upgrades were the alleviation of performance, cost and display-device availability concerns. In these systems, the upgraded (projection) displays were one-for-one form/fit replacements for the earlier units. As projection technology has matured, this situation has begun to evolve. The Lockheed-Martin F-35 is the first program in which the cockpit has been specifically designed to take advantage of one of the unique capabilities of rear projection display technology, namely the ability to replace multiple small screens with a single large conformal viewing surface in the form of a panoramic display. Other programs are expected to follow, since the panoramic formats enable increased mission effectiveness, reduced cost and greater information transfer to the pilot. Some of the advantages and technical challenges associated with panoramic projection displays for avionics applications are described below.

  8. Quality of life in patients with age-related macular degeneration with monocular and binocular legal blindness Qualidade de vida de pacientes com degeneração macular relacionada à idade com cegueira legal monocular e binocular

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2007-01-01

    Full Text Available OBJECTIVE: To evaluate the quality of life of persons affected by age-related macular degeneration resulting in monocular or binocular legal blindness. METHODS: An analytic transversal study using the National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) was performed. Inclusion criteria were: persons of both genders, aged over 50 years, absence of cataract, diagnosis of age-related macular degeneration in at least one eye, and absence of other macular diseases. The control group was matched by sex and age and had no ocular disease. RESULTS: Group 1 (monocular legal blindness) was composed of 54 patients (72.22% females and 27.78% males), aged 51 to 87 years, mean age 74.61 ± 7.27 years; group 2 (binocular legal blindness) was composed of 54 patients (46.30% females and 53.70% males), aged 54 to 87 years, mean age 75.61 ± 6.34 years. The control group was composed of 40 patients (40% females and 60% males), aged 50 to 81 years, mean age 65.65 ± 7.56 years. Most of the scores were statistically significantly higher in group 1 and the control group relative to group 2, and higher in the control group compared to group 1. CONCLUSIONS: The quality of life of persons with binocular blindness was clearly more limited than that of persons with monocular blindness. Both groups showed significant impairment in quality of life when compared to normal persons.

  9. Visual perceptual issues of the integrated helmet and display sighting system (IHADSS): four expert perspectives

    Science.gov (United States)

    Rash, Clarence E.; Heinecke, Kevin; Francis, Gregory; Hiatt, Keith L.

    2008-04-01

    The Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display (HMD) has been flown for over a quarter of a century on the U.S. Army's AH-64 Apache Attack Helicopter. The aircraft's successful deployment in both peacetime and combat has validated the original design concept for the IHADSS HMD. During its 1970s development phase, a number of design issues were identified as having the potential of introducing visual perception problems for aviators. These issues include monocular design, monochromatic imagery, reduced field-of-view (FOV), sensor spectrum, reduced resolution (effective visual acuity), and displaced visual input eye point. From their diverse perspectives, a panel of four experts - an HMD researcher, a cognitive psychologist, a flight surgeon, and a veteran AH-64 aviator - discuss the impact of the design issues on visual perception and related performance.

  10. Small - Display Cartography

    DEFF Research Database (Denmark)

    Nissen, Flemming; Hvas, Anders; Münster-Swendsen, Jørgen

    This report comprises the work carried out in the work-package of small-display cartography. The work-package has aimed at creating a general framework for small-display cartography. A solid framework facilitates an increased use of spatial data in mobile devices - thus enabling, together with the rapidly evolving positioning techniques, a new category of position-dependent, map-based services to be introduced. The report consists of the following parts: Part I: Categorization of handheld devices, Part II: Cartographic design for small-display devices, Part III: Study on the GiMoDig Client-Portal Service Communication and finally, Part IV: Concluding remarks and topics for further research on small-display cartography. Part II includes a separate Appendix D consisting of a cartographic design specification. Part III includes a separate Appendix C consisting of a schema specification, a separate...

  11. Flexible displays, rigid designs?

    DEFF Research Database (Denmark)

    Hornbæk, Kasper

    2015-01-01

    Rapid technological progress has enabled a wide range of flexible displays for computing devices, but the user experience--which we're only beginning to understand--will be the key driver for successful designs....

  12. Monocular and binocular steady-state flicker VEPs: frequency-response functions to sinusoidal and square-wave luminance modulation.

    Science.gov (United States)

    Nicol, David S; Hamilton, Ruth; Shahani, Uma; McCulloch, Daphne L

    2011-02-01

    Steady-state VEPs to full-field flicker (FFF) using sinusoidally modulated light were compared with those elicited by square-wave modulated light across a wide range of stimulus frequencies with monocular and binocular FFF stimulation. Binocular and monocular VEPs were elicited in 12 adult volunteers to FFF with two modes of temporal modulation: sinusoidal or square-wave (abrupt onset and offset, 50% duty cycle) at ten temporal frequencies ranging from 2.83 to 58.8 Hz. All stimuli had a mean luminance of 100 cd/m(2) with an 80% modulation depth (20-180 cd/m(2)). Response magnitudes at the stimulus frequency (F1) and at the double and triple harmonics (F2 and F3) were compared. For both sinusoidal and square-wave flicker, the FFF-VEP magnitudes at F1 were maximal for 7.52 Hz flicker. F2 was maximal for 5.29 Hz flicker, and F3 magnitudes were largest for flicker stimulation from 3.75 to 7.52 Hz. Square-wave flicker produced significantly larger F1 and F2 magnitudes for slow flicker rates (up to 5.29 Hz for F1; at 2.83 and 3.75 Hz for F2). The F3 magnitudes were larger overall for square-wave flicker. Binocular FFF-VEP magnitudes are larger than those of monocular FFF-VEPs, and the amount of this binocular enhancement is not dependent on the mode of flicker stimulation (mean binocular:monocular ratio 1.41, 95% CI: 1.2-1.6). Binocular enhancement of F1 for 21.3 Hz flicker was increased to a factor of 2.5 (95% CI: 1.8-3.5). In the healthy adult visual system, FFF-VEP magnitudes can be characterized by the frequency-response functions of F1, F2 and F3. Low-frequency roll-off in the FFF-VEP magnitudes is greater for sinusoidal flicker than for square-wave flicker for rates ≤ 5.29 Hz; magnitudes for higher-frequency flicker are similar for the two types of flicker. Binocular FFF-VEPs are larger overall than those recorded monocularly, and this binocular summation is enhanced at 21.3 Hz in the mid-frequency range.
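
The magnitudes at F1, F2 and F3 described above can be extracted with a single-bin discrete Fourier transform at each harmonic frequency. A minimal sketch with a synthetic waveform (illustrative amplitudes, not recorded VEP data); the sampling rate and record length are chosen so each harmonic completes an integer number of cycles:

```python
import cmath
import math

def harmonic_magnitude(signal, fs, freq):
    """Amplitude of the sinusoidal component at `freq` (Hz) via a single-bin
    DFT; exact when `freq` completes whole cycles over the record."""
    n = len(signal)
    w = cmath.exp(-2j * math.pi * freq / fs)
    acc = sum(x * w ** i for i, x in enumerate(signal))
    return 2 * abs(acc) / n

# Synthetic steady-state response: 7.52 Hz fundamental (F1) plus a smaller
# second harmonic (F2), mimicking the frequency-response analysis above.
fs, f1, n = 188.0, 7.52, 2500          # 100 full cycles of f1 in n samples
sig = [math.sin(2 * math.pi * f1 * i / fs)
       + 0.4 * math.sin(2 * math.pi * 2 * f1 * i / fs) for i in range(n)]

mag_f1 = harmonic_magnitude(sig, fs, f1)       # ~1.0
mag_f2 = harmonic_magnitude(sig, fs, 2 * f1)   # ~0.4
```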

  13. Liquid Crystal Airborne Display

    Science.gov (United States)

    1977-08-01

    A display using a large advertising alphanumeric (TCI) has been added to the front of the optical box used in the F-4 aircraft for HUD...properties over a wide range of temperatures, including normal room temperature. What are Liquid Crystals? Liquid crystals have been classified in three...matic functions and to present data needed for the semi-automatic and manual control of system functions. Existing aircraft using CRT display

  14. Military display performance parameters

    Science.gov (United States)

    Desjardins, Daniel D.; Meyer, Frederick

    2012-06-01

    The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.

  15. Raster graphics display library

    Science.gov (United States)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that provides the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed, and the use of each variable within each common block is discussed. A reference on the include files necessary to compile the display library is also included, listing each file and its purpose. The link map for MOVIE.BYU version 6, a general purpose computer graphics display system that uses RGDL software, is also contained.

  16. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  17. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.

  18. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Science.gov (United States)

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
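
The geometric idea behind inverse perspective mapping — projecting each image row onto the floor plane using the known camera height and tilt — can be sketched as follows (pinhole model; the focal length, tilt and height values are illustrative, not taken from the paper):

```python
import math

def ipm_ground_distance(v, cy, f, cam_height, pitch):
    """
    Inverse perspective mapping for a pinhole camera viewing a flat floor.
    v: pixel row; cy: principal-point row; f: focal length (px);
    cam_height: camera height above the floor (m); pitch: downward tilt (rad).
    Returns the forward distance (m) to the floor point imaged at row v.
    """
    # Ray angle below horizontal for this pixel row.
    angle = pitch + math.atan2(v - cy, f)
    if angle <= 0:
        raise ValueError("pixel is at or above the horizon")
    return cam_height / math.tan(angle)

# Camera 0.3 m above the floor, pitched 30 degrees down, f = 500 px, cy = 240.
d_center = ipm_ground_distance(240, 240, 500.0, 0.3, math.radians(30))
# Rows further down the image map to closer floor points.
d_low = ipm_ground_distance(400, 240, 500.0, 0.3, math.radians(30))
```

Mapping each labeled obstacle/floor pixel through this projection is what lets a low-mounted camera recover metric distance without point tracking.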

  19. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  20. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Directory of Open Access Journals (Sweden)

    Igor S. G. Campos

    2016-12-01

    Full Text Available In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.

  1. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    Science.gov (United States)

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.
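
For a downward-looking camera over flat terrain, the image motion of the ground is flow = f·v/h, so height follows from the UAV's velocity and the measured optical flow. A minimal sketch of that relation, with a crude flow-agreement check standing in for the paper's decision-tree reliability classifier (all numbers illustrative):

```python
import statistics

def height_from_flow(ground_speed, focal_px, flow_px_per_s):
    """
    Height over flat terrain for a downward-looking pinhole camera:
    ground image motion is flow = f * v / h  =>  h = f * v / flow.
    """
    return focal_px * ground_speed / flow_px_per_s

def trustworthy(flows, max_rel_spread=0.2):
    """
    Crude reliability gate standing in for the paper's decision tree:
    accept only if the tracked-feature flow magnitudes roughly agree.
    """
    mean = statistics.mean(flows)
    return statistics.pstdev(flows) / mean <= max_rel_spread

flows = [98.0, 102.0, 100.0, 101.0]    # px/s from tracked features
if trustworthy(flows):
    # 10 m/s ground speed, 500 px focal length -> height near 50 m.
    h = height_from_flow(10.0, 500.0, statistics.mean(flows))
```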

  2. RBF-Based Monocular Vision Navigation for Small Vehicles in Narrow Space below Maize Canopy

    Directory of Open Access Journals (Sweden)

    Lu Liu

    2016-06-01

    Full Text Available Maize is one of the major food crops in China. Traditionally, field operations are done by manual labor, where the farmers are threatened by the harsh environment and pesticides. On the other hand, it is difficult for large machinery to maneuver in the field due to limited space, particularly in the middle and late growth stage of maize. Unmanned, compact agricultural machines, therefore, are ideal for such field work. This paper describes a method of monocular visual recognition to navigate small vehicles between narrow crop rows. Edge detection and noise elimination were used for image segmentation to extract the stalks in the image. The stalk coordinates define passable boundaries, and a simplified radial basis function (RBF)-based algorithm was adapted for path planning to improve the fault tolerance of stalk coordinate extraction. The average image processing time, including network latency, is 220 ms. The average time consumption for path planning is 30 ms. The fast processing ensures a top speed of 2 m/s for our prototype vehicle. When operating at the normal speed (0.7 m/s), the rate of collision with stalks is under 6.4%. Additional simulations and field tests further proved the feasibility and fault tolerance of our method.
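
A simplified stand-in for the RBF-based path planning described above: Gaussian radial basis weights over the extracted stalk coordinates smooth the row centerline, so a single mis-detected stalk shifts the path only slightly. The coordinates below are hypothetical pixel values, not the paper's data:

```python
import math

def rbf_centerline(stalk_rows, left_x, right_x, query_row, sigma=20.0):
    """
    Gaussian-RBF smoothed centerline (a simplified stand-in for the paper's
    RBF path planner). The stalk pair detected at image row r contributes
    with weight exp(-(r - query_row)^2 / (2 sigma^2)), so one bad stalk
    coordinate is averaged out rather than steering the whole path.
    """
    num = den = 0.0
    for r, lx, rx in zip(stalk_rows, left_x, right_x):
        w = math.exp(-((r - query_row) ** 2) / (2 * sigma ** 2))
        num += w * (lx + rx) / 2.0      # midpoint between the row boundaries
        den += w
    return num / den

rows    = [100, 120, 140, 160]          # image rows with detected stalks
left_x  = [200, 202, 198, 201]          # left-boundary stalk x (px)
right_x = [400, 398, 402, 399]          # right-boundary stalk x (px)
center = rbf_centerline(rows, left_x, right_x, query_row=130)   # ~300 px
```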

  3. Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion

    Directory of Open Access Journals (Sweden)

    Huajun Liu

    2016-01-01

    Full Text Available This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate micro motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching of outdoor scenes; at the same time, the multi-scale strategy overcomes the problem of road surface self-similarity and local occlusions. Secondly, a support probability of flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE optical flow model is constructed based not only on image motion residuals but also their distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of inlier parts of optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
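
The support-probability idea can be illustrated in one dimension: each candidate flow vector is weighted by a smooth inlier probability derived from its residual, and the motion parameter is estimated by weighted consensus. This is a deliberately simplified sketch of the abstract's MLE formulation, with made-up numbers:

```python
import math

def support_probabilities(residuals, sigma=1.0):
    """
    Support probability for each candidate flow vector: inliers (small
    residual) get weight near 1, outliers decay smoothly toward 0. This is
    a simplified stand-in for the paper's MLE inlier/outlier model.
    """
    return [math.exp(-(r / sigma) ** 2) for r in residuals]

def weighted_translation(flows, residuals):
    """Estimate a 1-D image translation as the support-weighted mean flow."""
    w = support_probabilities(residuals)
    return sum(wi * fi for wi, fi in zip(w, flows)) / sum(w)

flows     = [2.0, 2.1, 1.9, 8.0]    # last vector is an outlier (px/frame)
residuals = [0.1, 0.2, 0.1, 6.0]    # disagreement with the consensus motion
est = weighted_translation(flows, residuals)   # near 2.0, outlier suppressed
```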

  4. Acute Myeloid Leukemia Relapse Presenting as Complete Monocular Vision Loss due to Optic Nerve Involvement

    Directory of Open Access Journals (Sweden)

    Shyam A. Patel

    2016-01-01

    Full Text Available Acute myeloid leukemia (AML) involvement of the central nervous system is relatively rare, and detection of leptomeningeal disease typically occurs only after a patient presents with neurological symptoms. The case herein describes a 48-year-old man with relapsed/refractory AML of the mixed lineage leukemia rearrangement subtype, who presents with monocular vision loss due to leukemic eye infiltration. MRI revealed right optic nerve sheath enhancement and restricted diffusion concerning for nerve ischemia and infarct from hypercellularity. Cerebrospinal fluid (CSF) analysis showed a total WBC count of 81/mcl with 96% AML blasts. The onset and progression of visual loss were in concordance with the rise in peripheral blood blast count. A low threshold for diagnosis of CSF involvement should be maintained in patients with hyperleukocytosis and high-risk cytogenetics so that prompt treatment with whole brain radiation and intrathecal chemotherapy can be delivered. This case suggests that the eye, as an immunoprivileged site, may serve as a sanctuary from which leukemic cells can resurge and contribute to relapsed disease in patients with high-risk cytogenetics.

  5. Cross-Covariance Estimation for Ekf-Based Inertial Aided Monocular Slam

    Science.gov (United States)

    Kleinert, M.; Stilla, U.

    2011-04-01

    Repeated observation of several characteristically textured surface elements allows the reconstruction of the camera trajectory and a sparse point cloud which is often referred to as "map". The extended Kalman filter (EKF) is a popular method to address this problem, especially if real-time constraints have to be met. Inertial measurements as well as a parameterization of the state vector that conforms better to the linearity assumptions made by the EKF may be employed to reduce the impact of linearization errors. Therefore, we adopt an inertial-aided monocular SLAM approach where landmarks are parameterized in inverse depth w.r.t. the coordinate system in which they were observed for the first time. In this work we present a method to estimate the cross-covariances between landmarks which are introduced in the EKF state vector for the first time and the old filter state that can be applied in the special case at hand where each landmark is parameterized w.r.t. an individual coordinate system.
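
The inverse-depth parameterization mentioned above stores each landmark as the camera position at first observation, the bearing angles, and the inverse depth ρ = 1/d; converting back to a Euclidean point is a one-liner. A minimal sketch (the anchor frame and angle conventions are assumptions for illustration):

```python
import math

def inverse_depth_to_euclidean(anchor, azimuth, elevation, rho):
    """
    Convert an inverse-depth landmark (anchor camera position, bearing
    angles, inverse depth rho = 1/d) to a Euclidean 3-D point:
        p = anchor + (1 / rho) * m(azimuth, elevation)
    where m is the unit bearing vector. Distant, low-parallax landmarks
    remain well behaved as rho -> 0, which is why this parameterization
    conforms better to the EKF's linearity assumptions.
    """
    m = (math.cos(elevation) * math.cos(azimuth),
         math.cos(elevation) * math.sin(azimuth),
         math.sin(elevation))
    d = 1.0 / rho
    return tuple(a + d * mi for a, mi in zip(anchor, m))

# Landmark first seen from the origin, straight ahead, 5 m away (rho = 0.2).
p = inverse_depth_to_euclidean((0.0, 0.0, 0.0), 0.0, 0.0, 0.2)
```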

  6. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available We propose a framework combining machine learning with dynamic optimization to reconstruct a scene in 3D automatically from a single still image of an unstructured outdoor environment, based on monocular vision from an uncalibrated camera. After a first image segmentation, a search-tree strategy based on Bayes' rule is used to identify the occlusion hierarchy of all regions. After a second, superpixel segmentation, the AdaBoost algorithm is applied to integrate detected depth cues from lighting, texture and material. Finally, all the factors above are optimized under constraints, yielding the full depth map of the image. The source image is then integrated with its depth map, via point clouds or bilinear interpolation, to realize the 3D reconstruction. Experiments comparing against typical methods on an associated database demonstrate that our method improves, to a certain extent, the plausibility of the estimated overall 3D structure of the scene, and it requires neither manual assistance nor camera-model information.

  7. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation

    Science.gov (United States)

    Cao, Yuanzhouhan; Shen, Chunhua; Shen, Heng Tao

    2017-02-01

    Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.

  8. Why is binocular rivalry uncommon? Discrepant monocular images in the real world

    Directory of Open Access Journals (Sweden)

    Derek Henry Arnold

    2011-10-01

    Full Text Available When different images project to corresponding points in the two eyes they can instigate a phenomenon called binocular rivalry (BR), wherein each image seems to intermittently disappear such that only one of the two images is seen at a time. Cautious readers may have noted an important caveat in the opening sentence – this situation can instigate BR, but usually it doesn't. Unmatched monocular images are frequently encountered in daily life, due either to differential occlusions of the two eyes or to selective obstructions of just one eye, but this does not tend to induce BR. Here I will explore the reasons for this and discuss implications for BR in general. It will be argued that BR is resolved in favour of the instantaneously stronger neural signal, and that this process is driven by an adaptation that enhances the visibility of distant fixated objects over that of more proximate obstructions of an eye. Accordingly, BR would reflect the dynamics of an inherently visual operation that usually deals with real-world constraints.

  9. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Science.gov (United States)

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm itself showed good accuracy. PMID:27929424
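    The geometric relation behind flow-based height estimation is that, for a nadir-pointing camera translating over flat ground at speed V, ground features flow past at an angular rate of roughly V/h. A toy sketch of the resulting estimate (assuming the speed comes from the UAV's motion information and the flow is already converted to rad/s; the paper's full pipeline with feature tracking and the decision-tree gate is not reproduced):

```python
def height_from_flow(ground_speed, flow_rate):
    """Estimate flying height h from translational optical flow.

    For pure horizontal translation at speed V (m/s) over flat terrain,
    ground features move at an angular rate of V / h (rad/s), so h = V / flow.
    """
    if flow_rate <= 0.0:
        raise ValueError("flow rate must be positive")
    return ground_speed / flow_rate
```

    This also shows why a reliability classifier is needed: near-zero or noisy flow makes the division blow up.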

  10. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Tae-Jae Lee

    2016-03-01

    Full Text Available This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives a 0.0% false positive rate, while the conventional method gives 17.6%.
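    Inverse perspective mapping rests on the flat-floor assumption: once the camera height and tilt are known, each pixel row maps to a unique distance along the floor. A geometric sketch of that mapping (illustrative only; the paper's full IPM warps the whole image and feeds an MRF segmentation):

```python
import math

def ipm_ground_distance(v, cy, fy, cam_height, pitch):
    """Distance along the floor to the point imaged at pixel row v,
    for a camera at height cam_height tilted down by pitch (radians),
    assuming a flat floor (the core IPM assumption)."""
    ray = math.atan2(v - cy, fy)      # ray angle below the optical axis
    angle = pitch + ray               # total angle below horizontal
    if angle <= 0.0:
        return float('inf')           # at or above the horizon: no floor hit
    return cam_height / math.tan(angle)
```

    Pixels belonging to upright obstacles violate this assumption, which is what makes them separable from the floor after the mapping.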

  11. Development of an indoor positioning and navigation system using monocular SLAM and IMU

    Science.gov (United States)

    Mai, Yu-Ching; Lai, Ying-Chih

    2016-07-01

    The positioning and navigation systems based on the Global Positioning System (GPS) have been developed over past decades and have been widely used in outdoor environments. However, high-rise buildings or indoor environments can block the satellite signal. Therefore, many indoor positioning methods have been developed to respond to this issue. In addition to distance measurements using sonar and laser sensors, this study aims to develop a method that integrates a monocular simultaneous localization and mapping (MonoSLAM) algorithm with an inertial measurement unit (IMU) to build an indoor positioning system. The MonoSLAM algorithm measures the distance (depth) between the image features and the camera. With the help of an Extended Kalman Filter (EKF), MonoSLAM can provide real-time position, velocity and camera attitude in the world frame. Since feature points will not always be visible and cannot be trusted at all times, a wrong estimate of the features can cause the estimated position to diverge. To overcome this problem, a multi-sensor fusion algorithm based on a multi-rate Kalman Filter was applied in this study. Finally, the experimental results verified that the proposed system improves the reliability and accuracy of MonoSLAM by integrating the IMU measurements.
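    The multi-rate fusion idea can be illustrated with a 1-D constant-velocity Kalman filter that predicts from IMU acceleration every step and corrects with a MonoSLAM position fix only when one is available. This is a simplified sketch under assumed noise parameters q and r, not the study's actual filter:

```python
def fuse(accels, vision_fixes, dt, q=0.01, r=0.25):
    """1-D multi-rate fusion sketch: IMU prediction each step, vision
    correction when vision_fixes[k] is not None (IMU-only steps pass None)."""
    x_pos, x_vel = 0.0, 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    estimates = []
    for a, z in zip(accels, vision_fixes):
        # Predict with the IMU acceleration (constant-velocity model).
        x_pos += x_vel * dt + 0.5 * a * dt * dt
        x_vel += a * dt
        p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
        P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + q, p01 + dt * p11],
             [p10 + dt * p11, p11 + q]]
        if z is not None:
            # MonoSLAM position fix: standard Kalman update with H = [1, 0].
            s = P[0][0] + r
            k0, k1 = P[0][0] / s, P[1][0] / s
            innov = z - x_pos
            x_pos += k0 * innov
            x_vel += k1 * innov
            P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                 [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(x_pos)
    return estimates
```

    On IMU-only steps the filter coasts on the prediction; each vision fix pulls both position and velocity back toward the measurement, which is what keeps a diverging MonoSLAM estimate in check.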

  12. Dynamic plasmonic colour display

    Science.gov (United States)

    Duan, Xiaoyang; Kamin, Simon; Liu, Na

    2017-02-01

    Plasmonic colour printing based on engineered metasurfaces has revolutionized colour display science due to its unprecedented subwavelength resolution and high-density optical data storage. However, advanced plasmonic displays with novel functionalities including dynamic multicolour printing, animations, and highly secure encryption have remained in their infancy. Here we demonstrate a dynamic plasmonic colour display technique that enables all the aforementioned functionalities using catalytic magnesium metasurfaces. Controlled hydrogenation and dehydrogenation of the constituent magnesium nanoparticles, which serve as dynamic pixels, allow for plasmonic colour printing, tuning, erasing and restoration of colour. Different dynamic pixels feature distinct colour transformation kinetics, enabling plasmonic animations. Through smart material processing, information encoded on selected pixels, which are indiscernible to both optical and scanning electron microscopies, can only be read out using hydrogen as a decoding key, suggesting a new generation of information encryption and anti-counterfeiting applications.

  13. TrkA activation in the rat visual cortex by antirat trkA IgG prevents the effect of monocular deprivation.

    Science.gov (United States)

    Pizzorusso, T; Berardi, N; Rossi, F M; Viegi, A; Venstrom, K; Reichardt, L F; Maffei, L

    1999-01-01

    It has been recently shown that intraventricular injections of nerve growth factor (NGF) prevent the effects of monocular deprivation in the rat. We have tested the localization and the molecular nature of the NGF receptor(s) responsible for this effect by activating cortical trkA receptors in monocularly deprived rats by cortical infusion of a specific agonist of NGF on trkA, the bivalent antirat trkA IgG (RTA-IgG). TrkA protein was detected by immunoblot in the rat visual cortex during the critical period. Rats were monocularly deprived for 1 week (P21-28) and RTA-IgG or control rabbit IgG were delivered by osmotic minipumps. The effects of monocular deprivation on the ocular dominance of visual cortical neurons were assessed by extracellular single cell recordings. We found that the shift towards the ipsilateral, non-deprived eye was largely prevented by RTA-IgG. Infusion of RTA-IgG combined with antibody that blocks p75NTR (REX), slightly reduced RTA-IgG effectiveness in preventing monocular deprivation effects. These results suggest that NGF action in visual cortical plasticity is mediated by cortical TrkA receptors with p75NTR exerting a facilitatory role.

  14. Refreshing Refreshable Braille Displays.

    Science.gov (United States)

    Russomanno, Alexander; O'Modhrain, Sile; Gillespie, R Brent; Rodger, Matthew W M

    2015-01-01

    The increased access to books afforded to blind people via e-publishing has given them long-sought independence for both recreational and educational reading. In most cases, blind readers access materials using speech output. For some content such as highly technical texts, music, and graphics, speech is not an appropriate access modality as it does not promote deep understanding. Therefore blind braille readers often prefer electronic braille displays. But these are prohibitively expensive. The search is on, therefore, for a low-cost refreshable display that would go beyond current technologies and deliver graphical content as well as text. Many solutions have been proposed, some of which reduce costs by restricting the number of characters that can be displayed, even down to a single braille cell. In this paper, we demonstrate that restricting tactile cues during braille reading leads to poorer performance in a letter recognition task. In particular, we show that lack of sliding contact between the fingertip and the braille reading surface results in more errors and that the number of errors increases as a function of presentation speed. These findings suggest that single cell displays which do not incorporate sliding contact are likely to be less effective for braille reading.

  15. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays. Russell D... musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre respectively. This distinction between physical and perceptual measures of sound properties is an

  16. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, due to their Size, Weight and Power (SWaP) constraints. Lightweight sensors such as digital cameras can therefore be the best choice compared with sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key idea of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277

  17. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs.

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-05-07

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is complex, especially for micro and small aerial vehicles, due to their Size, Weight and Power (SWaP) constraints. Lightweight sensors such as digital cameras can therefore be the best choice compared with sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key idea of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works.
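    The size-expansion cue can be sketched directly: build the convex hull of the tracked feature points in consecutive frames and compare the enclosed areas. A self-contained sketch using Andrew's monotone-chain hull and the shoelace formula; the 1.2 expansion threshold is an assumed value, not taken from the paper:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points (tuples)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Area of the convex hull via the shoelace formula."""
    hull = convex_hull(points)
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def approaching(prev_points, cur_points, ratio_thresh=1.2):
    """Flag an approaching obstacle when the hull area expands enough."""
    return hull_area(cur_points) / hull_area(prev_points) > ratio_thresh
```

    A rapidly expanding hull between consecutive frames is the monocular stand-in for decreasing depth.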

  18. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth, which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing, it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
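    The reported overlap scores use the Dice coefficient, which for two binary masks is twice the intersection divided by the sum of the mask sizes. A minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * intersection / (sum(mask_a) + sum(mask_b))
```
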

  19. Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy.

    Science.gov (United States)

    de Croon, Guido C H E

    2016-01-07

    The visual cue of optical flow plays an important role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, while optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed for monocular distance estimation, relying on optical flow maneuvers and knowledge of the control inputs (efference copies). It is shown analytically that given a fixed control gain, the stability of a constant divergence control loop only depends on the distance to the approached surface. At close distances, the control loop starts to exhibit self-induced oscillations. The robot can detect these oscillations and hence be aware of the distance to the surface. The proposed stability-based strategy for estimating distances has two main attractive characteristics. First, self-induced oscillations can be detected robustly by the robot and are hardly influenced by wind. Second, the distance can be estimated during a zero divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and on board a Parrot AR drone 2.0. It is shown that the strategy can be used to: (1) trigger a final approach response during a constant divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to continuously stay on the 'edge of oscillation.'
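    The trigger in this strategy is the onset of self-induced oscillation in the divergence error. A toy detector that thresholds the recent peak-to-peak amplitude; the window and threshold are illustrative values, not those tuned for the AR drone experiments:

```python
def oscillation_detected(divergence_errors, window=30, thresh=0.05):
    """Flag self-induced oscillation by thresholding the peak-to-peak
    amplitude of the recent divergence-error history."""
    recent = divergence_errors[-window:]
    return (max(recent) - min(recent)) > thresh
```

    Because the closed loop only becomes unstable near the surface, the first time this flag fires doubles as a coarse distance cue.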

  20. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss.

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research.

  1. Virtual acoustic displays

    Science.gov (United States)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative mannikins and simulations of room acoustics. Such an interface also requires careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners, localization performance was only slightly degraded compared to a subject's inherent ability. Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate

  2. Duality in binocular rivalry: distinct sensitivity of percept sequence and percept duration to imbalance between monocular stimuli.

    Directory of Open Access Journals (Sweden)

    Chen Song

    Full Text Available BACKGROUND: Visual perception is usually stable and accurate. However, when the two eyes are simultaneously presented with conflicting stimuli, perception falls into a sequence of spontaneous alternations, switching between one stimulus and the other every few seconds. Known as binocular rivalry, this visual illusion decouples subjective experience from physical stimulation and provides a unique opportunity to study the neural correlates of consciousness. The temporal properties of this alternating perception have been intensively investigated for decades, yet the relationship between two fundamental properties - the sequence of percepts and the duration of each percept - remains largely unexplored. METHODOLOGY/PRINCIPAL FINDINGS: Here we examine the relationship between the percept sequence and the percept duration by quantifying their sensitivity to the strength imbalance between two monocular stimuli. We found that the percept sequence is far more susceptible to the stimulus imbalance than the percept duration is. The percept sequence always begins with the stronger stimulus, even when the stimulus imbalance is too weak to cause a significant bias in the percept duration. Therefore, introducing a small stimulus imbalance affects the percept sequence, whereas increasing the imbalance affects the percept duration, but not vice versa. To investigate why the percept sequence is so vulnerable to the stimulus imbalance, we further measured the interval between the stimulus onset and the first percept, during which subjects experienced the fusion of two monocular stimuli. We found that this interval is dramatically shortened with increased stimulus imbalance.
CONCLUSIONS/SIGNIFICANCE: Our study shows that in binocular rivalry, the strength imbalance between monocular stimuli has a much greater impact on the percept sequence than on the percept duration, and that increasing this imbalance can accelerate the process responsible for the percept sequence.

  3. Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-Based Simultaneous Localization and Mapping (SLAM) Tasks in Autonomous Mobile Robots

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2012-01-01

    Full Text Available This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.

  4. The Energy Spectrum of Ultra-High-Energy Cosmic Rays Measured by the Telescope Array FADC Fluorescence Detectors in Monocular Mode

    CERN Document Server

    Abu-Zayyad, T; Allen, M; Anderson, R; Azuma, R; Barcikowski, E; Belz, J W; Bergman, D R; Blake, S A; Cady, R; Cheon, B G; Chiba, J; Chikawa, M; Cho, E J; Cho, W R; Fujii, H; Fujii, T; Fukuda, T; Fukushima, M; Hanlon, W; Hayashi, K; Hayashi, Y; Hayashida, N; Hibino, K; Hiyama, K; Honda, K; Iguchi, T; Ikeda, D; Ikuta, K; Inoue, N; Ishii, T; Ishimori, R; Ito, H; Ivanov, D; Iwamoto, S; Jui, C C H; Kadota, K; Kakimoto, F; Kalashev, O; Kanbe, T; Kasahara, K; Kawai, H; Kawakami, S; Kawana, S; Kido, E; Kim, H B; Kim, H K; Kim, J H; Kitamoto, K; Kitamura, S; Kitamura, Y; Kobayashi, K; Kobayashi, Y; Kondo, Y; Kuramoto, K; Kuzmin, V; Kwon, Y J; Lan, J; Lim, S I; Lundquist, J P; Machida, S; Martens, K; Matsuda, T; Matsuura, T; Matsuyama, T; Matthews, J N; Myers, I; Minamino, M; Miyata, K; Murano, Y; Nagataki, S; Nakamura, T; Nam, S W; Nonaka, T; Ogio, S; Ogura, J; Ohnishi, M; Ohoka, H; Oki, K; Oku, D; Okuda, T; Ono, M; Oshima, A; Ozawa, S; Park, I H; Pshirkov, M S; Rodriguez, D C; Roh, S Y; Rubtsov, G; Ryu, D; Sagawa, H; Sakurai, N; Sampson, A L; Scott, L M; Shah, P D; Shibata, F; Shibata, T; Shimodaira, H; Shin, B K; Shin, J I; Shirahama, T; Smith, J D; Sokolsky, P; Sonley, T J; Springer, R W; Stokes, B T; Stratton, S R; Stroman, T A; Suzuki, S; Takahashi, Y; Takeda, M; Taketa, A; Takita, M; Tameda, Y; Tanaka, H; Tanaka, K; Tanaka, M; Thomas, S B; Thomson, G B; Tinyakov, P; Tkachev, I; Tokuno, H; Tomida, T; Troitsky, S; Tsunesada, Y; Tsutsumi, K; Tsuyuguchi, Y; Uchihori, Y; Udo, S; Ukai, H; Vasiloff, G; Wada, Y; Wong, T; Yamakawa, Y; Yamane, R; Yamaoka, H; Yamazaki, K; Yang, J; Yoneda, Y; Yoshida, S; Yoshii, H; Zollinger, R; Zundel, Z

    2013-01-01

    We present a measurement of the energy spectrum of ultra-high-energy cosmic rays performed by the Telescope Array experiment using monocular observations from its two new FADC-based fluorescence detectors. After a short description of the experiment, we describe the data analysis and event reconstruction procedures. Since the aperture of the experiment must be calculated by Monte Carlo simulation, we describe this calculation and the comparisons of simulated and real data used to verify the validity of the aperture calculation. Finally, we present the energy spectrum calculated from the merged monocular data sets of the two FADC-based detectors, and also the combination of this merged spectrum with an independent, previously published monocular spectrum measurement performed by Telescope Array's third fluorescence detector (Abu-Zayyad et al., Astropart. Phys. 39 (2012), 109). This combined spectrum corroborates the recently published Telescope Array surface detector spectrum (Abu-Zayyad et al., ...

  5. Sensor fusion of monocular cameras and laser rangefinders for line-based Simultaneous Localization and Mapping (SLAM) tasks in autonomous mobile robots.

    Science.gov (United States)

    Zhang, Xinzheng; Rad, Ahmad B; Wong, Yiu-Kwong

    2012-01-01

    This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.
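    The homography-based data association for monocular SLAM can be sketched as follows: warp each feature from the previous frame through the inter-frame homography H, then accept the nearest current feature inside a gating radius. An illustrative sketch, not the paper's implementation; the gate value is an assumption:

```python
def apply_homography(H, pt):
    """Map a 2-D point through a 3x3 homography (nested-list matrix)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def associate(H, prev_feats, cur_feats, gate=2.0):
    """Nearest-neighbour association after warping previous features
    through H; pairs outside the gating radius (pixels) are rejected."""
    pairs = []
    for i, p in enumerate(prev_feats):
        px, py = apply_homography(H, p)
        best, best_dist = None, gate
        for j, (cx, cy) in enumerate(cur_feats):
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_dist:
                best, best_dist = j, d
        if best is not None:
            pairs.append((i, best))
    return pairs
```

    Warping first shrinks the search region, which is the computational saving the record alludes to.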

  6. Vergence and accommodation to multiple-image-plane stereoscopic displays: ``real world'' responses with practical image-plane separations?

    Science.gov (United States)

    MacKenzie, Kevin J.; Dickson, Ruth A.; Watt, Simon J.

    2012-01-01

    Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One solution is to distribute image intensity across a number of widely spaced image planes--a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters (D, the reciprocal of distance in meters), suggesting that a small number of image planes could eliminate vergence-accommodation conflicts over a large range of simulated distances. Evidence exists, however, of systematic differences between accommodation responses to binocular and monocular stimuli when the stimulus to accommodation is degraded, or at an incorrect distance. We examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to changes in depth specified by depth filtering, using image-plane separations of 0.6 to 1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6 to 0.9 D, but differed thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display.
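    Depth filtering distributes image intensity between the two planes that bracket the simulated distance, linearly in diopters. A sketch of the weight computation; the plane placements in the example are illustrative, not the study's apparatus values:

```python
def depth_filter_weights(target_d, near_d, far_d):
    """Split image intensity between two image planes by dioptric
    linear interpolation ('depth filtering').

    All arguments are in diopters, with near_d > far_d; returns
    (weight_on_near_plane, weight_on_far_plane), summing to 1."""
    w_near = (target_d - far_d) / (near_d - far_d)
    return w_near, 1.0 - w_near
```

    With planes roughly 0.6-0.9 D apart, these weights place the accommodative stimulus at the dioptrically weighted mean of the two planes, which is what lets a handful of planes span a large range of simulated distances.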

  7. Image Descriptors for Displays

    Science.gov (United States)

    1975-03-01

    hypothetical television display. The viewing distance is 4 picture heights, and the bandwidth limitation has been set by the U.S. Monochrome Standards... significantly influence the power spectrum over most of the video frequency range. A large dc component and a small random component provide another scene... influences. It was illuminated with natural light to a brightness of over 300 ft-L. The high brightness levels were chosen so as to nearly reproduce the

  8. Refrigerated display cabinets; Butikskyla

    Energy Technology Data Exchange (ETDEWEB)

    Fahlen, Per

    2000-07-01

    This report summarizes experience from SP research and assignments regarding refrigerated transport and storage of food, mainly in the retail sector. It presents the fundamentals of heat and mass transfer in display cabinets with special focus on indirect systems and secondary refrigerants. Moreover, the report includes a brief account of basic food hygiene and the related regulations. The material has been compiled for educational purposes in the Masters program at Chalmers Technical University.

  9. TrkA activation in the rat visual cortex by antirat trkA IgG prevents the effect of monocular deprivation

    OpenAIRE

    Pizzorusso, Tommaso; Berardi, Nicoletta; Rossi, Francesco M.; Viegi, Alessandro; Venstrom, Kristine; Reichardt, Louis F.; Maffei, Lamberto

    1999-01-01

    It has been recently shown that intraventricular injections of nerve growth factor (NGF) prevent the effects of monocular deprivation in the rat. We have tested the localization and the molecular nature of the NGF receptor(s) responsible for this effect by activating cortical trkA receptors in monocularly deprived rats by cortical infusion of a specific agonist of NGF on trkA, the bivalent antirat trkA IgG (RTA-IgG). TrkA protein was detected by immunoblot in the rat visual cortex during the ...

  10. An effective algorithm for monocular video to stereoscopic video transformation based on three-way luminance correction

    Institute of Scientific and Technical Information of China (English)

    郑越; 杨淑莹

    2012-01-01

This paper presents a new, efficient algorithm for transforming monocular video into stereoscopic video. With this algorithm, monocular video can be converted to stereoscopic format in near real time, and the output stream can be shown with a lifelike three-dimensional effect on any supported display device. The core idea is to extract images from the original monocular video, transform them into stereoscopic images according to a Gaussian distribution, build a three-level weighted-average brightness map from the generated stereoscopic image sequence, correct the image regions in each of the three levels, and finally compose the complete three-dimensional video. Replacing the traditional, time-consuming depth-image generation step with this approach significantly improves transformation performance: images with a three-dimensional stereoscopic effect can now be shown in real time while the original monocular video is broadcast live.
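The three-level brightness segmentation that this record describes can be sketched as a rough stand-in. The paper does not give its exact weighting or thresholds, so the 0.75x / 1.25x cut-offs and the function name here are illustrative assumptions:

```python
import numpy as np

def three_level_brightness_map(gray):
    """Partition a grayscale frame into three brightness levels around its
    average brightness, standing in for the paper's three-level correction
    stage.  The cut-offs are assumed, not taken from the paper."""
    mean = float(gray.mean())
    # 0 = dark region, 1 = mid region, 2 = bright region
    return np.digitize(gray, [0.75 * mean, 1.25 * mean])

frame = np.array([[10, 100], [130, 250]], dtype=float)
print(three_level_brightness_map(frame))
# [[0 1]
#  [1 2]]
```

Each of the three regions would then receive its own luminance correction before the left/right views are composed.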

  11. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Longitudinal Cohort Study of Apache AH Mk 1 Pilots -(Vision and Handedness)

    Science.gov (United States)

    2015-05-19

HCVA values were available for 69 control subjects. For the right eye, the initial mean visual acuity was 0.10 logMAR (Snellen equivalent of 6/7.8 [20/26]); the final right eye mean visual acuity was 0.05 logMAR (Snellen equivalent of 6/6.9 [20/23]). For the left eye, the initial mean visual acuity was 0.11 logMAR (Snellen equivalent of 6/8.1 [20/27]); the final left eye mean visual acuity was 0.06 logMAR (Snellen equivalent of 6/7.2 [20/24]).

  12. Integration of monocular motion signals and the analysis of interocular velocity differences for the perception of motion-in-depth.

    Science.gov (United States)

    Shioiri, Satoshi; Kakehi, Daisuke; Tashiro, Tomoyoshi; Yaguchi, Hirohisa

    2009-12-09

We investigated how the mechanism for perceiving motion-in-depth based on interocular velocity differences (IOVDs) integrates signals from the motion spatial-frequency (SF) channels. We focused on whether this integration is implemented before or after the comparison of the velocity signals from the two eyes. We measured the spatial-frequency selectivity of the motion aftereffect for motion in depth (3D MAE). The 3D MAE showed little spatial-frequency selectivity, whereas the 2D lateral MAE showed clear spatial-frequency selectivity under the same conditions. This indicates that the outputs of the monocular motion SF channels are combined before the IOVD is analyzed. The presumption was confirmed by the disappearance of the 3D MAE after exposure to superimposed gratings of different spatial frequencies moving in opposite directions; the direction of the 2D MAE depended on the test spatial frequency under the same conditions. These results suggest that the IOVD is calculated at a relatively late stage of motion analysis, and that some monocular information is preserved even after the integration of the motion SF channel outputs.
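The IOVD geometry underlying this record can be written down directly. This is the textbook small-angle relation, not the authors' model, and the sign depends on the chosen angular-velocity convention:

```python
def motion_in_depth(v_left, v_right, z, iod=0.065):
    """Approximate speed in depth (m/s) from an interocular velocity
    difference via the small-angle relation dZ/dt ~ (Z**2 / I) * (v_R - v_L),
    where v_* are horizontal angular velocities (rad/s) at the two eyes,
    Z the viewing distance (m), and I the interocular distance (m).
    Textbook IOVD geometry, not the paper's model; sign convention varies."""
    return (z ** 2 / iod) * (v_right - v_left)

# Equal velocities in the two eyes (pure lateral motion) give zero
# motion-in-depth; only the *difference* between the eyes signals depth:
print(motion_in_depth(0.02, 0.02, 1.0))  # 0.0
```

Because the computation needs only one velocity estimate per eye, it is compatible with the paper's conclusion that each eye's SF channels are pooled before the interocular comparison.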

  13. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    Institute of Scientific and Technical Information of China (English)

    Wang Xufeng; Kong Xingwei; Zhi Jianhui; Chen Yong; Dong Xinmin

    2015-01-01

Drogue recognition and 3D locating is a key problem during the docking phase of autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with a red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and ensure robustness to drogue diversity and changes in environmental conditions, without using a set of infrared light emitting diodes (LEDs) on the parachute part of the drogue. Secondly, considering camera lens distortion, a monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system, with the drogue attitude provided. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method. Experimental results show the performance of the entire system in contrast with other methods, validating that the proposed method can recognize and locate the drogue three-dimensionally, rapidly and precisely.
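The core of monocular range estimation to a feature of known physical size, such as the drogue's ring, is a similar-triangles step under a pinhole camera model. The real system also corrects lens distortion and recovers full 3D pose; the parameter names below are illustrative:

```python
def drogue_range(ring_diameter_m, ring_diameter_px, focal_px):
    """Range to the drogue from the apparent size of its ring feature
    under a pinhole model: Z = f * D / d, where f is the focal length in
    pixels, D the true ring diameter, and d its image diameter in pixels.
    Core geometry only; parameter names are assumed for illustration."""
    return focal_px * ring_diameter_m / ring_diameter_px

# A 0.6 m ring spanning 120 px with an 800 px focal length -> 4 m range
print(drogue_range(0.6, 120, 800))  # 4.0
```

The lateral (x, y) position follows from the same projection by back-projecting the ring center through the camera intrinsics at the recovered range.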

  14. Book Display as Adult Service

    Directory of Open Access Journals (Sweden)

    Matthew S. Moore

    1997-03-01

Full Text Available Book display as an adult service is defined as choosing and positioning adult books from the collection to increase their circulation. The author contrasts bookstore arrangement for sales versus library arrangement for access. The paper considers the library-as-a-whole as a display, examines the right size for an in-library display, and discusses mass displays, end-caps, on-shelf displays, and the Tiffany approach. The author proposes that an effective display depends on an imaginative, unifying theme, and that book displays are part of the joy of libraries.

  15. Handbook of Visual Display Technology

    CERN Document Server

    Cranton, Wayne; Fihn, Mark

    2012-01-01

    The Handbook of Visual Display Technology is a unique work offering a comprehensive description of the science, technology, economic and human interface factors associated with the displays industry. An invaluable compilation of information, the Handbook will serve as a single reference source with expert contributions from over 150 international display professionals and academic researchers. All classes of display device are covered including LCDs, reflective displays, flexible solutions and emissive devices such as OLEDs and plasma displays, with discussion of established principles, emergent technologies, and particular areas of application. The wide-ranging content also encompasses the fundamental science of light and vision, image manipulation, core materials and processing techniques, display driving and metrology.

  16. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    Science.gov (United States)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates with low-altitude flight, limited payload, and low-accuracy onboard sensors. Accounting for these characteristics, a method is developed to determine the location of a ground moving target imaged from the air using a monocular camera mounted on a MAV. This method eliminates the requirements for a terrain database (elevation maps) and for altimeters that provide the MAV's and target's altitudes. Instead, the proposed method requires only the MAV flight status provided by its onboard navigation system, which includes an inertial measurement unit (IMU) and global position system (GPS). The key is to obtain accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region set around the target in the current image, features that lie on the same plane as the target are extracted and retained as aided features. Then, an inverse-velocity method calculates the location of these points by integrating them with the aircraft status. The target altitude, calculated from the position information of these aided features, is combined with the aircraft status and image coordinates to geo-locate the target. Meanwhile, a framework with a Bayesian estimator is employed to eliminate noise from the camera, IMU and GPS. Firstly, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for estimating the aircraft states and the locations of the aided features that define the moving target's local environment.
Secondly, an unscented transformation (UT) method determines the estimated mean and covariance of the target location from the aircraft states and aided-feature locations, and then exports them for the
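The unscented-transformation step can be sketched generically. This is the standard UT with the usual (alpha, beta, kappa) weighting, not necessarily the authors' exact parameterization:

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f,
    returning the estimated mean and covariance of f(x).  Standard UT;
    the parameters are conventional defaults, not the paper's values."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # sigma-point spread
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigma])      # propagate sigma points
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# For a linear map the UT is exact, which makes a quick sanity check easy:
m, P = unscented_transform(np.array([1.0, 2.0]),
                           np.diag([0.04, 0.09]),
                           lambda x: x)
```

In the paper's setting, f would map EKF-estimated aircraft states and aided-feature locations to the target's geo-location.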

  17. Latest development of display technologies

    Science.gov (United States)

    Gao, Hong-Yue; Yao, Qiu-Xiang; Liu, Pan; Zheng, Zhi-Qiang; Liu, Ji-Cheng; Zheng, Hua-Dong; Zeng, Chao; Yu, Ying-Jie; Sun, Tao; Zeng, Zhen-Xiang

    2016-09-01

In this review we will focus on recent progress in the field of two-dimensional (2D) and three-dimensional (3D) display technologies. We present the current display materials and their applications, including organic light-emitting diodes (OLEDs), flexible OLEDs, quantum dot light-emitting diodes (QLEDs), active-matrix organic light-emitting diodes (AMOLEDs), electronic paper (E-paper), curved displays, stereoscopic 3D displays, volumetric 3D displays, light field 3D displays, and holographic 3D displays. Conventional 2D display devices, such as liquid crystal displays (LCDs), often produce ambiguous renderings of high-dimensional data because they lack true depth information. This review thus provides a detailed description of 3D display technologies.

  18. The impact of congenital versus acquired monocular vision on self-reported quality of vision

    Directory of Open Access Journals (Sweden)

    Marcelo Caram Ribeiro Fernandes

    2010-12-01

Full Text Available Objectives: When vision in one eye is preserved (monocular vision) and surgery on the contralateral eye carries high risk, poor prognosis and/or limited resources, it is not clear whether the benefit of binocularity outweighs that of reorienting to monocular vision. The objective is to quantify the impact on self-reported quality of vision of the binocular versus monocular condition and, in the latter case, of congenital versus acquired monocular vision. Methods: Patients with visual acuity (VA) greater than 0.5 in each eye completed the structured 14-question questionnaire (VF-14), in which a score from 0 to 100 indicates the patient's level of satisfaction with their vision, from low to high respectively. Epidemiological data and the scores of the four groups were recorded and submitted to statistical analysis. Results: VF-14 interviews with 56 individuals revealed that the highest scores were similar between controls and subjects with congenital monocular vision, while intermediate and low scores were obtained by individuals with acquired monocular vision and bilaterally blind individuals, respectively (p<0.001). The most difficult activities for individuals with acquired monocular vision were identifying small print, recognizing people, distinguishing traffic signs and watching TV. Conclusion: The study confirmed that vision loss has an unfavorable impact on self-reported performance of activities, greater for acquired than for congenital monocular vision. The data suggest that rehabilitation measures should be considered to improve quality of vision in untreatable, high-risk or poor-prognosis diseases.

  19. Binocularity in the little owl, Athene noctua. II. Properties of visually evoked potentials from the Wulst in response to monocular and binocular stimulation with sine wave gratings.

    Science.gov (United States)

    Porciatti, V; Fontanesi, G; Raffaelli, A; Bagnoli, P

    1990-01-01

Visually evoked potentials (VEPs) were recorded from the Wulst surface of the little owl, Athene noctua, in response to counterphase reversal of sinusoidal gratings of different contrast, spatial frequency and mean luminance, presented either monocularly or binocularly. Monocular full-field stimuli presented to either eye evoked VEPs of similar amplitude, waveform and latency. Under binocular viewing, VEPs approximately doubled in amplitude without waveform changes. VEPs with similar characteristics could be obtained in response to stimulation of the contralateral, but not the ipsilateral, hemifield. These results suggest that a 50% recrossing occurs in thalamic efferents and that different ipsilateral and contralateral regions converge onto the same Wulst sites. The VEP amplitude progressively decreased as spatial frequency increased beyond 2 cycles/degree, and the high spatial-frequency cut-off (VEP acuity) was higher under binocular viewing (8 cycles/degree) than under monocular viewing (5 cycles/degree) (200 cd/m2, 45% contrast). VEP acuity increased with contrast and decreased with reduced mean luminance. The binocular gain in both VEP amplitude and VEP acuity was largest at the lowest luminance levels. Binocular VEP summation occurred in the medium-to-high contrast range. With decreasing contrast, both monocular and binocular VEPs progressively decreased in amplitude and tended toward the same contrast threshold. The VEP contrast threshold depended on spatial frequency (0.6-1.8% in the range 0.12-2 cycles/degree). Binocular VEPs often showed facilitatory interaction (binocular/monocular amplitude ratio greater than 2), but binocular VEP amplitude did not change either with stimulus orientation (horizontal vs. vertical gratings) or with different retinal disparities.(ABSTRACT TRUNCATED AT 250 WORDS)

  20. The performances of a super-multiview simulator and the presence of monocular depth sense

    Science.gov (United States)

    Lee, Beom-Ryeol; Park, Jung-Chul; Jeong, Ilkon; Son, Jung-young

    2015-05-01

A simulator which can test the super-multiview condition is introduced. It allows each eye to view two adjacent view images simultaneously and displays the patched images that appear at the viewing zone of a contact-type multiview 3-D display. Accommodation and vergence tests with an accommodometer reveal that viewers can verge and accommodate even to images at 600 mm and at 2.7 m from them when the display screen/panel is located 1.58 m away. This verging and accommodating distance range is much greater than the 1.3 m ~ 1.9 m range determined by the viewers' depth of field. Furthermore, the patched images also provide a good depth sense, which can be better than that from the individual view images.
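The distances in this record are easier to compare in diopters, the reciprocal-metre unit conventionally used for accommodation (as in record 6 above). A minimal illustration:

```python
def diopters(distance_m):
    """Dioptric distance: D = 1 / distance in metres."""
    return 1.0 / distance_m

# The screen sits at 1.58 m; viewers could still verge and accommodate to
# images at 0.6 m and 2.7 m, i.e. roughly 1.0 D in front of and 0.3 D
# behind the screen in dioptric terms:
for d_m in (0.6, 1.58, 2.7):
    print(f"{d_m:4} m -> {diopters(d_m):.2f} D")
```

Expressed this way, the achievable range clearly exceeds a depth of field of a few tenths of a diopter around the screen.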

  1. Measuring Algorithm for the Distance to a Preceding Vehicle on Curve Road Using On-Board Monocular Camera

    Science.gov (United States)

    Yu, Guizhen; Zhou, Bin; Wang, Yunpeng; Wun, Xinkai; Wang, Pengcheng

    2015-12-01

Because of increasingly severe traffic safety problems, Advanced Driver Assistance Systems (ADAS) have received widespread attention. Measuring the distance to a preceding vehicle is important for ADAS. However, existing algorithms focus more on straight road sections than on curves. In this paper, we present a novel algorithm for measuring the distance to a preceding vehicle on a curved road using an on-board monocular camera. Firstly, the characteristics of driving on curved roads are analyzed and recognition of the preceding vehicle's road area is proposed. Then, the vehicle detection and distance measuring algorithms are investigated. We have verified these algorithms in real road driving. The experimental results show that the method proposed in this paper can detect the preceding vehicle on curved roads and accurately calculate the longitudinal and horizontal distances to it.
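The basic monocular ranging step that such systems build on assumes a flat ground plane and a forward-looking pinhole camera; the paper's contribution is the curve-road analysis layered on top. A sketch of the flat-road projection, with illustrative parameter names:

```python
def longitudinal_distance(v_px, horizon_px, cam_height_m, focal_px):
    """Distance to a point on the road from its image row, assuming a
    flat ground plane and a pinhole camera: Z = f * h / (v - v_horizon),
    where v is the image row of the vehicle's road contact point, h the
    camera height, and f the focal length in pixels.  Basic projection
    only; parameter names are assumptions for illustration."""
    dv = v_px - horizon_px
    if dv <= 0:
        raise ValueError("point is at or above the horizon")
    return focal_px * cam_height_m / dv

# Camera 1.2 m high, f = 1000 px, contact point 40 px below horizon -> 30 m
print(longitudinal_distance(540, 500, 1.2, 1000))  # 30.0
```

On a curve, the contact point must first be located within the recognized road area, since the preceding vehicle is no longer directly ahead in the image.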

  2. LHCb Event display

    CERN Document Server

    Trisovic, Ana

    2014-01-01

    The LHCb Event Display was made for educational purposes at the European Organization for Nuclear Research, CERN in Geneva, Switzerland. The project was implemented as a stand-alone application using C++ and ROOT, a framework developed by CERN for data analysis. This paper outlines the development and architecture of the application in detail, as well as the motivation for the development and the goals of the exercise. The application focuses on the visualization of events recorded by the LHCb detector, where an event represents a set of charged particle tracks in one proton-proton collision. Every particle track is coloured by its type and can be selected to see its essential information such as mass and momentum. The application allows students to save this information and calculate the invariant mass for any pair of particles. Furthermore, the students can use additional calculating tools in the application and build up a histogram of these invariant masses. The goal for the students is to find a $D^0$ par...
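The invariant-mass calculation the students perform follows directly from the particles' four-momenta. The formula is standard special relativity in natural units; the (E, px, py, pz) tuple layout here is an illustrative choice, not the application's actual data structure:

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system from (E, px, py, pz)
    four-vectors in GeV: m**2 = (E1 + E2)**2 - |p1 + p2|**2."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Two back-to-back particles, each with E = 5 GeV and |p| = 3 GeV:
print(invariant_mass((5.0, 3.0, 0.0, 0.0), (5.0, -3.0, 0.0, 0.0)))  # 10.0
```

Histogramming this quantity over many candidate pairs is how a resonance such as the $D^0$ shows up as a peak.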

  3. Colorimetry for CRT displays.

    Science.gov (United States)

    Golz, Jürgen; MacLeod, Donald I A

    2003-05-01

    We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
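The colorimetric pipeline this record analyzes, from linear phosphor intensities to cone excitations and cone contrasts, reduces to a matrix transform. The matrix values below are placeholders, not measured data: in practice each column is the phosphor's measured spectrum weighted by a chosen set of cone sensitivity functions, which is exactly where the candidate-sensitivity differences the authors discuss enter.

```python
import numpy as np

# Hypothetical cone excitations (rows: L, M, S) produced by unit emission
# of each CRT phosphor (columns: R, G, B).  Placeholder numbers.
PHOSPHOR_TO_LMS = np.array([
    [0.40, 0.30, 0.05],
    [0.15, 0.45, 0.08],
    [0.02, 0.06, 0.60],
])

def cone_excitations(rgb_linear):
    """Linear phosphor intensities -> (L, M, S) cone excitations."""
    return PHOSPHOR_TO_LMS @ np.asarray(rgb_linear)

def cone_contrasts(rgb_linear, rgb_background):
    """Weber cone contrasts of a stimulus against a background, the other
    visual response measure the paper considers."""
    lms = cone_excitations(rgb_linear)
    lms_bg = cone_excitations(rgb_background)
    return (lms - lms_bg) / lms_bg
```

Observer variation (pigment polymorphism, macular pigmentation) amounts to perturbing the matrix entries, which is why the blue-phosphor column is most affected by macular pigment.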

  4. Monocular inhibition reveals temporal and spatial changes in gene expression in the primary visual cortex of marmoset.

    Directory of Open Access Journals (Sweden)

    Yuki eNakagami

    2013-04-01

Full Text Available We investigated the time course of the expression of several activity-dependent genes evoked by visual inputs in the primary visual cortex (V1) of adult marmosets. To examine the rapid time course of activity-dependent gene expression, marmosets were first monocularly inactivated with tetrodotoxin (TTX), kept in darkness for two days, and then exposed to light stimulation of various lengths. Activity-dependent genes, including HTR1B and HTR2A, whose activity dependency we previously reported, and the well-known immediate early genes (IEGs) c-FOS, ZIF268, and ARC, were examined by in situ hybridization. Using this system, we first demonstrated the ocular-dominance pattern of gene expression in V1 under this condition: IEGs were expressed in columnar patterns throughout layers II-VI in all the tested monocular marmosets. Second, we showed that HTR1B and HTR2A expression is regulated by retinal spontaneous activity, because their mRNA expression was sustained at a certain level regardless of visual stimulation and was inhibited by blockade of retinal activity with TTX. Third, the IEGs dynamically changed their laminar distribution from half an hour to several hours after stimulus onset, with a unique time course for each gene, and their expression patterns differed among neurons of each layer. These results suggest that neurons in the marmoset primary visual cortex are subject to different regulation upon changes in retinal activity, which should be related to the highly differentiated laminar structure of primate visual systems, reflecting the functions of activity-dependent gene expression in marmoset V1.

  5. Effects of colored light, color of comparison stimulus, and illumination on error in perceived depth with binocular and monocular viewing.

    Science.gov (United States)

    Huang, Kuo-Chen

    2007-06-01

Two experiments assessed the effects of colored light, color of a comparison stimulus, and illumination on error in perceived depth with binocular and monocular vision. Exp. 1 assessed the effects of colored light, color of the comparison stimulus, and source of depth cues on error in perceived depth. A total of 29 women and 19 men, Taiwanese college or graduate students ages 20 to 30 years (M=24.0, SD=2.5), participated; they were randomly divided into five groups, each assigned to one of five colored light conditions. Analyses showed that the color of the comparison stimulus significantly affected error in perceived depth, as this error was significantly greater for a red comparison stimulus than for blue and yellow comparison stimuli. Colored light also significantly affected error in perceived depth: error under white and yellow light was significantly less than that under green light, and error under white light was significantly less than that under blue light, while white, yellow, and red light did not differ. Error in perceived depth for binocular viewing was significantly less than that for monocular viewing, while sex had no effect. In Exp. 2, the effect of illumination on error in perceived depth was explored with 21 women and 15 men, Taiwanese college students with a mean age of 19.8 yr. (SD=1.1). Analysis indicated that illumination significantly affected error in perceived depth, as error under a 40-W condition was significantly greater than under the 20- and 60-W conditions, although the latter two did not differ.

  6. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  7. Data Display in Qualitative Research

    Directory of Open Access Journals (Sweden)

    Susana Verdinelli PsyD

    2013-02-01

    Full Text Available Visual displays help in the presentation of inferences and conclusions and represent ways of organizing, summarizing, simplifying, or transforming data. Data displays such as matrices and networks are often utilized to enhance data analysis and are more commonly seen in quantitative than in qualitative studies. This study reviewed the data displays used by three prestigious qualitative research journals within a period of three years. The findings include the types of displays used in these qualitative journals, the frequency of use, and the purposes for using visual displays as opposed to presenting data in text.

  8. Unique interactive projection display screen

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.T.

    1997-11-01

    Projection systems continue to be the best method to produce large (1 meter and larger) displays. However, in order to produce a large display, considerable volume is typically required. The Polyplanar Optic Display (POD) is a novel type of projection display screen, which for the first time, makes it possible to produce a large projection system that is self-contained and only inches thick. In addition, this display screen is matte black in appearance allowing it to be used in high ambient light conditions. This screen is also interactive and can be remotely controlled via an infrared optical pointer resulting in mouse-like control of the display. Furthermore, this display need not be flat since it can be made curved to wrap around a viewer as well as being flexible.

  9. Augmenting digital displays with computation

    Science.gov (United States)

    Liu, Jing

As we inevitably step deeper and deeper into a world connected via the Internet, more and more information will be exchanged digitally. Displays are the interface between digital information and each individual. Naturally, one fundamental goal of displays is to reproduce information as realistically as possible, since humans still care a great deal about what happens in the real world. Human eyes are the receiving end of this information exchange; it is therefore impossible to study displays without studying the human visual system. In fact, the design of displays is closely coupled with what human eyes are capable of perceiving. For example, we are less interested in building displays that emit light in the invisible spectrum. This dissertation explores how we can augment displays with computation, taking both display hardware and the human visual system into consideration. Four novel projects on display technologies are included in this dissertation. First, we propose a software-based approach to driving multiview autostereoscopic displays. Our display algorithm can dynamically assign views to hardware display zones based on multiple observers' current head positions, substantially reducing crosstalk and stereo inversion. Second, we present a dense projector array that creates a seamless 3D viewing experience for multiple viewers. We smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array's field of view, reducing image distortion, crosstalk, and artifacts from tracking errors. Third, we propose a method for high dynamic range display calibration that takes into account the variation of the chrominance error over luminance. We propose a data structure enabling efficient representation and querying of the calibration function, which also allows user-guided balancing between memory consumption and the amount of computation. Fourth, we present user studies that demonstrate that the ~60 Hz critical flicker fusion
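The dynamic view assignment described in the first project can be caricatured as a cyclic shift of which rendered view feeds each hardware zone. Everything here, the zone count, the zone pitch, and the shift rule, is invented for illustration; the dissertation's actual algorithm also handles multiple observers:

```python
def assign_views(head_x_mm, n_zones=8, zone_pitch_mm=65.0):
    """Toy dynamic view assignment for a multiview autostereoscopic panel:
    cyclically shift the view-to-zone mapping as the tracked head moves,
    so the observer keeps seeing a correct stereo pair.  All parameters
    and the shift rule are illustrative assumptions."""
    shift = round(head_x_mm / zone_pitch_mm)
    return [(z + shift) % n_zones for z in range(n_zones)]

print(assign_views(0.0))    # [0, 1, 2, 3, 4, 5, 6, 7]
print(assign_views(130.0))  # [2, 3, 4, 5, 6, 7, 0, 1]
```

Remapping in software, rather than moving optics, is what lets the approach reduce crosstalk and stereo inversion as the viewer moves.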

  10. Rapid display of radiographic images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.; Whitman, Robert A.; Blaine, G. James; Jost, R. Gilbert; Karlsson, L. M.; Monsees, Thomas L.; Hassen, Gregory L.; David, Timothy C.

    1991-07-01

    The requirements for the rapid display of radiographic images exceed the capabilities of widely available display, computer, and communications technologies. Computed radiography captures data with a resolution of about four megapixels. Large-format displays are available that can present over four megapixels. One megapixel displays are practical for use in combination with large-format displays and in areas where the viewing task does not require primary diagnosis. This paper describes an electronic radiology system that approximates the highest quality systems, but through the use of several interesting techniques allows the possibility of its widespread installation throughout hospitals. The techniques used can be grouped under three major system concepts: a local, high-speed image server, one or more physician's workstations each with one or more high-performance auxiliary displays specialized to the radiology viewing task, and dedicated, high-speed communication links between the server and the displays. This approach is enhanced by the use of a progressive transmission scheme to decrease the latency for viewing four megapixel images. The system includes an image server with storage for over 600 4-megapixel images and a high-speed link. A subsampled megapixel image is fetched from disk and transmitted to the display in about one second followed by the full resolution 4-megapixel image in about 2.5 seconds. Other system components include a megapixel display with a 6-megapixel display memory space and frame-rate update of image roam, zoom, and contrast. Plans for clinical use are presented.
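The two-stage progressive transmission described above (a subsampled preview in about one second, full resolution in about 2.5 seconds) can be sketched as follows. The residual stage here is simply "every pixel not in the preview"; real schemes encode the residual more cleverly, so this is an illustrative decomposition only:

```python
import numpy as np

def progressive_stages(image, factor=4):
    """Split an image into a coarse preview (stage 1) and the remaining
    pixels (stage 2) for progressive transmission.  Illustrative: the
    paper's actual scheme is not specified at this level of detail."""
    preview = image[::factor, ::factor]      # stage 1: subsampled image
    mask = np.ones(image.shape, dtype=bool)
    mask[::factor, ::factor] = False
    residual = image[mask]                   # stage 2: remaining pixels
    return preview, residual

img = np.arange(64, dtype=np.uint16).reshape(8, 8)
preview, residual = progressive_stages(img)
print(preview.shape, preview.size + residual.size == img.size)  # (2, 2) True
```

The viewer renders the preview immediately, then fills in the residual as it arrives, which is what hides the latency of moving a 4-megapixel radiograph across the link.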

  11. Military display market segment: helicopters

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    2004-09-01

    The military display market is analyzed in terms of one of its segments: helicopter displays. Parameters requiring special consideration, to include luminance ranges, contrast ratio, viewing angles, and chromaticity coordinates, are examined. Performance requirements for rotary-wing displays relative to several premier applications are summarized. Display sizes having aggregate defense applications of 5,000 units or greater and having DoD applications across 10 or more platforms, are tabulated. The issue of size commonality is addressed where distribution of active area sizes across helicopter platforms, individually, in groups of two through nine, and ten or greater, is illustrated. Rotary-wing displays are also analyzed by technology, where total quantities of such displays are broken out into CRT, LCD, AMLCD, EM, LED, Incandescent, Plasma and TFEL percentages. Custom, versus Rugged commercial, versus commercial off-the-shelf designs are contrasted. High and low information content designs are identified. Displays for several high-profile military helicopter programs are discussed, to include both technical specifications and program history. The military display market study is summarized with breakouts for the helicopter market segment. Our defense-wide study as of March 2004 has documented 1,015,494 direct view and virtual image displays distributed across 1,181 display sizes and 503 weapon systems. Helicopter displays account for 67,472 displays (just 6.6% of DoD total) and comprise 83 sizes (7.0% of total DoD) in 76 platforms (15.1% of total DoD). Some 47.6% of these rotary-wing applications involve low information content displays comprising just a few characters in one color; however, as per fixed-wing aircraft, the predominant instantiation involves higher information content units capable of showing changeable graphics, color and video.

  12. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    Science.gov (United States)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. Developing 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation, widely excluding other, undesired distance hints. Thereafter we conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which can detect the user's eye position and dynamically steer the image lobes in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time such a test has been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is an order of magnitude more relevant. Combining both cues improved the precision of distance estimation by another 30-40%.

  13. X-1 on display

    Science.gov (United States)

    1949-01-01

    A Bell Aircraft Corporation X-1 series aircraft on display at an Open House at NACA Muroc Flight Test Unit or High-Speed Flight Research Station hangar on South Base of Edwards Air Force Base, California. (The precise date of the photo is uncertain, but it is probably before 1948.) The instrumentation that was carried aboard the aircraft to gather data is on display. The aircraft data was recorded on oscillograph film that was read, calibrated, and converted into meaningful parameters for the engineers to evaluate from each research flight. In the background of the photo are several early U.S. jets. These include several Lockheed P-80 Shooting Stars, which were used as chase planes on X-1 flights; two Bell P-59 Airacomets, the first U.S. jet pursuit aircraft (fighter in later parlance); and a prototype Republic XP-84 Thunderjet. There were five versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for eXperimental Sonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. 
The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant

  14. Laser illuminated flat panel display

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.T.

    1995-12-31

    A 10 inch laser illuminated flat panel Planar Optic Display (POD) screen has been constructed and tested. This POD screen technology is an entirely new concept in display technology. Although the initial display is flat and made of glass, this technology lends itself to applications where a plastic display might be wrapped around the viewer. The display screen is comprised of hundreds of planar optical waveguides where each glass waveguide represents a vertical line of resolution. A black cladding layer, having a lower index of refraction, is placed between each waveguide layer. Since the cladding makes the screen surface black, the contrast is high. The prototype display is 9 inches wide by 5 inches high and approximately 1 inch thick. A 3 milliwatt HeNe laser is used as the illumination source and a vector scanning technique is employed.

  15. Miniature information displays: primary applications

    Science.gov (United States)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-04-01

    Positioned to replace current liquid crystal display technology in many applications, miniature information displays have evolved to provide several truly portable platforms for the world's growing personal computing and communication needs. The technology and functionality of handheld computer and communicator systems has finally surpassed many of the standards that were originally established for desktop systems. In these new consumer electronics, performance, display size, packaging, power consumption, and cost have always been limiting factors for fabricating genuinely portable devices. The rapidly growing miniature information display manufacturing industry is making it possible to bring a wide range of highly anticipated new products to new markets.

  16. Colorimetric evaluation of display performance

    Science.gov (United States)

    Kosmowski, Bogdan B.

    2001-08-01

    The development of information techniques, using new technologies, physical phenomena and coding schemes, enables new application areas to benefit from the introduction of displays. The full utilization of the visual perception of a human operator requires the color coding process to be implemented. The evolution of displays, from achromatic (B&W) and monochromatic to multicolor and full-color, enhances the possibilities of information coding, creating however a need for quantitative methods of display parameter assessment. Quantitative assessment of color displays restricted to photometric measurements of their parameters is an estimate leading to considerable errors. Therefore, the measurements of a display's color properties have to be based on spectral measurements of the display and its elements. The quantitative assessment of the display system parameters should be made using colorimetric systems like CIE1931, CIE1976 LAB or LUV. In the paper, the constraints on the measurement method selection for color display evaluation are discussed, and the relations between their qualitative assessment and the ergonomic conditions of their application are also presented. The paper presents examples of using the LUV colorimetric system and the color difference ΔE in the optimization of color liquid crystal displays.
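
The color difference the abstract applies reduces, in its simplest CIE76 form, to a Euclidean distance in a uniform color space. A minimal Python sketch of that computation in CIELAB coordinates, assuming the D65 white point; the tristimulus values below are illustrative, not measured display data:

```python
import math

# D65 reference white tristimulus values (XYZ, Y normalized to 100).
D65 = (95.047, 100.0, 108.883)

def _f(t):
    # Piecewise nonlinearity from the CIELAB definition.
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def xyz_to_lab(xyz, white=D65):
    """Convert CIE XYZ tristimulus values to CIE 1976 (L*, a*, b*)."""
    fx, fy, fz = (_f(c / w) for c, w in zip(xyz, white))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Hypothetical target vs. measured display patch (XYZ of an sRGB-like red).
lab_target = xyz_to_lab((41.24, 21.26, 1.93))
lab_measured = xyz_to_lab((40.10, 20.80, 2.10))
print(round(delta_e76(lab_target, lab_measured), 2))
```

A ΔE near 1 is commonly treated as a just-noticeable difference, which is why the paper can use ΔE directly as an optimization criterion.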

  17. Updated defense display market assessment

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1999-08-01

    This paper addresses the number, function and size of principal military displays and establishes a basis to determine the opportunities for technology insertion in the immediate future and into the next millennium. Principal military displays are defined as those occupying appreciable crewstation real-estate and/or those without which the platform could not carry out its intended mission. DoD 'office' applications are excluded from this study. The military displays market is specified by such parameters as active area and footprint size, and other characteristics such as luminance, gray scale, resolution, angle, color, video capability, and night vision imaging system compatibility. Funded, future acquisitions, planned and predicted crewstation modification kits, and form-fit upgrades are taken into account. This paper provides an overview of the DoD niche market, allowing both government and industry a necessary reference by which to meet DoD requirements for military displays in a timely and cost-effective manner. The aggregate DoD installed base for direct-view and large-area military displays is presently estimated to be in excess of 313,000. Miniature displays are those which must be magnified to be viewed, involve a significantly different manufacturing paradigm and are used in helmet mounted displays and thermal weapon sight applications. Some 114,000 miniature displays are presently included within future weapon system acquisition plans. For vendor production planning purposes it is noted that foreign military sales could substantially increase these quantities. The vanishing vendor syndrome (VVS) for older display technologies continues to be a growing, pervasive problem throughout DoD, which consequently must leverage the more modern, especially flat panel, display technologies being developed to replace older, especially cathode ray tube, technology for civil-commercial markets. Total DoD display needs (FPD, HMD) are some 427,000.

  18. Design and fabrication of concave-convex lens for head mounted virtual reality 3D glasses

    Science.gov (United States)

    Deng, Zhaoyang; Cheng, Dewen; Hu, Yuan; Huang, Yifan; Wang, Yongtian

    2015-08-01

    As a kind of lightweight and convenient tool to achieve stereoscopic vision, virtual reality glasses are gaining popularity nowadays. For these glasses, molded plastic lenses are often adopted to handle both the imaging properties and the cost of mass production. However, the as-built performance of the glasses depends on both the optical design and the injection molding process, and maintaining the profile of the lens during the injection molding process presents particular challenges. In this paper, optical design is combined with processing simulation analysis to obtain a design result suitable for injection molding. Based on the design and analysis results, different experiments are done using high-quality equipment to optimize the process parameters of injection molding. Finally, a single concave-convex lens with a field of view of 90° is designed for the virtual reality 3D glasses. The as-built profile error of the lens is controlled within 5 μm, which indicates that the designed shape of the lens is faithfully realized and the designed optical performance can thus be achieved.

  19. Tracking of Head Position Relative to the Screen Using Head Mounted Camera

    Directory of Open Access Journals (Sweden)

    Evaldas Borcovas

    2013-05-01

    Full Text Available In this paper, head-position locating systems were analyzed and scientific articles with different proposed methods were reviewed. The chosen system places the camera on the user's head. The main parameters of head-positioning systems were analyzed, and the order in which those parameters are determined is laid out. A diagram of the system and a detailed block diagram of the algorithm are provided. The algorithm implementation uses an edge detection method (Sobel) and a refinement algorithm (subpixel adjustment). The system is realized in the Matlab and C# environments, and optimal parameters for the algorithm's execution were determined. Execution of the algorithm takes 1.2 s in the Matlab environment and 126 ms in the C# environment. Examination of the longest-running algorithm segment showed that image filtering is carried out in 107 ms. The uncertainty of the algorithm can be divided into static and measurement components. The maximum static uncertainty is 1.63 mm for head position parameters and 0.16° for orientation parameters; the maximum measurement uncertainty is 4 mm for position parameters and 0.11° for orientation parameters. Article in Lithuanian
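
The Sobel edge-detection step named in the abstract can be sketched as follows. This is a pure-Python illustration (the original system runs in Matlab and C#); the 4×4 test image is a hypothetical vertical step edge, not data from the paper:

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Return the gradient-magnitude image for a 2D list of gray levels.

    Border pixels are left at zero, since the 3x3 window does not fit there.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response peaks along the transition columns.
img = [[0, 0, 10, 10]] * 4
edges = sobel_magnitude(img)
```

In a real pipeline like the one described, the integer-pixel edge locations found this way would then be refined by the subpixel adjustment step, e.g. by fitting a parabola to the gradient magnitudes around each peak.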

  20. mRNAs coding for neurotransmitter receptors and voltage-gated sodium channels in the adult rabbit visual cortex after monocular deafferentiation

    Science.gov (United States)

    Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo

    1998-01-01

    It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250

  1. Flexible Bistable Cholesteric Reflective Displays

    Science.gov (United States)

    Yang, Deng-Ke

    2006-03-01

    Cholesteric liquid crystals (ChLCs) exhibit two stable states at zero field condition-the reflecting planar state and the nonreflecting focal conic state. ChLCs are an excellent candidate for inexpensive and rugged electronic books and papers. This paper will review the display cell structure,materials and drive schemes for flexible bistable cholesteric (Ch) reflective displays.

  2. Three-dimensional display technologies.

    Science.gov (United States)

    Geng, Jason

    2013-01-01

    The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain's power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have increasingly demanded true 3D display with no eyeglasses (autostereoscopic). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies.

  3. Metabolic Changes in the Bilateral Visual Cortex of the Monocular Blind Macaque: A Multi-Voxel Proton Magnetic Resonance Spectroscopy Study.

    Science.gov (United States)

    Wu, Lingjie; Tang, Zuohua; Feng, Xiaoyuan; Sun, Xinghuai; Qian, Wen; Wang, Jie; Jin, Lixin; Jiang, Jingxuan; Zhong, Yufeng

    2017-02-01

    The metabolic changes that accompany adaptive plasticity in the visual cortex after early monocular visual loss were unclear. In this study, we detected the metabolic changes in the bilateral visual cortex of normal (group A) and monocularly blind (group B) macaques to study adaptive plasticity, using multi-voxel proton magnetic resonance spectroscopy (¹H-MRS) at 32 months after right optic nerve transection. We then compared the N-acetyl aspartate (NAA)/creatine (Cr), myo-inositol (Ins)/Cr, choline (Cho)/Cr and Glx (glutamate + glutamine)/Cr ratios in the visual cortex between the two groups, as well as between the left and right visual cortices of groups A and B. Compared with group A, decreased NAA/Cr and Glx/Cr ratios were found in the bilateral visual cortex of group B, more clearly in the right visual cortex, whereas the Ins/Cr and Cho/Cr ratios of group B were increased. All of these findings were further confirmed by immunohistochemical staining. In conclusion, differences in metabolic ratios between groups A and B can be detected by multi-voxel ¹H-MRS in the visual cortex, which is valuable for investigating the adaptive plasticity of the monocularly blind macaque.

  4. A Re-Evaluation of Achromatic Spatiotemporal Vision: Nonoriented Filters are Monocular, they Adapt and Can be Used for Decision-Making at High Flicker Speeds

    Directory of Open Access Journals (Sweden)

    Tim S. Meese

    2011-05-01

    Full Text Available Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatiotemporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: possibly, those experiments revealed the characteristics of suppression (e.g., gain control), not excitation, or merely the isotropic subunits of the oriented detecting mechanisms. To shed light on this, we used all three paradigms to focus on the “high-speed” corner of spatiotemporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular and dichoptic stimulus presentations. To account for our results we devised a simple model involving an isotropic monocular filter stage feeding orientation-tuned binocular filters. Both filter stages are adaptable and their outputs are available to the decision stage following nonlinear contrast transduction. However, the monocular isotropic filters adapt only to high-speed stimuli, consistent with a magnocellular sub-cortical substrate, and benefit decision making only for high-speed stimuli. According to this model, the visual processes revealed by masking, adaptation and summation are related but not identical.

  5. Comparison of the monocular Humphrey visual field and the binocular Humphrey esterman visual field test for driver licensing in glaucoma subjects in Sweden

    Directory of Open Access Journals (Sweden)

    Ayala Marcelo

    2012-08-01

    Full Text Available Abstract Background The purpose of this study was to compare the monocular Humphrey Visual Field (HVF) with the binocular Humphrey Esterman Visual Field (HEVF) for determining whether subjects suffering from glaucoma fulfilled the new medical requirements for possession of a Swedish driver’s license. Methods HVF SITA Fast 24–2 full threshold (monocularly) and HEVF (binocularly) were performed consecutively on the same day on 40 subjects with glaucomatous damage of varying degrees in both eyes. Results were assessed as either “pass” or “fail”, according to the new medical requirements put into effect September 1, 2010 by the Swedish Transport Agency. Results Forty subjects were recruited and participated in the study. Sixteen subjects passed both tests, and sixteen subjects failed both tests. Eight subjects passed the HEVF but failed the HVF. There was a significant difference between HEVF and HVF (χ², p = 0.004). There were no subjects who passed the HVF but failed the HEVF. Conclusions The monocular visual field test (HVF) gave more specific information about the location and depth of the defects, and is therefore the overwhelming method of choice for use in diagnostics. The binocular visual field test (HEVF) seems not to be as efficient as the HVF in finding visual field defects in glaucoma subjects, and is therefore doubtful for evaluating visual capabilities in traffic situations.

  6. Comparison of the monocular Humphrey Visual Field and the binocular Humphrey Esterman Visual Field test for driver licensing in glaucoma subjects in Sweden.

    Science.gov (United States)

    Ayala, Marcelo

    2012-08-02

    The purpose of this study was to compare the monocular Humphrey Visual Field (HVF) with the binocular Humphrey Esterman Visual Field (HEVF) for determining whether subjects suffering from glaucoma fulfilled the new medical requirements for possession of a Swedish driver's license. HVF SITA Fast 24-2 full threshold (monocularly) and HEVF (binocularly) were performed consecutively on the same day on 40 subjects with glaucomatous damage of varying degrees in both eyes. Results were assessed as either "pass" or "fail", according to the new medical requirements put into effect September 1, 2010 by the Swedish Transport Agency. Forty subjects were recruited and participated in the study. Sixteen subjects passed both tests, and sixteen subjects failed both tests. Eight subjects passed the HEVF but failed the HVF. There was a significant difference between HEVF and HVF (χ², p = 0.004). There were no subjects who passed the HVF but failed the HEVF. The monocular visual field test (HVF) gave more specific information about the location and depth of the defects, and is therefore the overwhelming method of choice for use in diagnostics. The binocular visual field test (HEVF) seems not to be as efficient as the HVF in finding visual field defects in glaucoma subjects, and is therefore doubtful for evaluating visual capabilities in traffic situations.
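
The abstract reports a χ² result (p = 0.004) on paired pass/fail outcomes, where all 8 discordant subjects fell in one direction. For paired binary data of this shape, the textbook choice is McNemar's test; a sketch of its exact (binomial) form on the reported counts, not a claim about the authors' exact procedure:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test on the discordant pair counts b and c.

    Returns the two-sided p-value for H0: the two tests fail subjects
    at the same rate. Concordant pairs do not enter the statistic.
    """
    n = b + c
    k = min(b, c)
    # Tail probability of the smaller discordant count under Binomial(n, 0.5).
    p = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * p)

# Reported counts: 16 passed both, 16 failed both,
# 8 passed HEVF but failed HVF, 0 passed HVF but failed HEVF.
p_value = mcnemar_exact(8, 0)
print(round(p_value, 4))  # 0.0078 two-sided; the one-sided tail is 0.0039
```

The one-sided tail (1/2⁸ ≈ 0.004) matches the reported p-value, so the study's conclusion of a significant difference between the two tests is reproduced under this assumed test.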

  7. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
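
Once cup and disc boundaries are segmented, the two screening parameters defined above are simple ratios. A minimal sketch with hypothetical boundary measurements, modeling both boundaries as circles for illustration:

```python
import math

def cdr(cup_diameter, disc_diameter):
    """Cup-to-disc diameter ratio (CDR)."""
    return cup_diameter / disc_diameter

def car(cup_area, disc_area):
    """Cup-to-disc area ratio (CAR)."""
    return cup_area / disc_area

# Hypothetical segmented boundaries (millimetres, illustrative values only).
cup_d, disc_d = 0.6, 1.5
cup_a = math.pi * (cup_d / 2) ** 2
disc_a = math.pi * (disc_d / 2) ** 2

print(round(cdr(cup_d, disc_d), 2))   # 0.4
print(round(car(cup_a, disc_a), 2))   # 0.16
```

For concentric circular boundaries CAR is simply CDR squared; real cup and disc boundaries are irregular, which is why the paper computes areas from the segmented contours rather than from diameters.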

  8. Flat panel display - Impurity doping technology for flat panel displays

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Toshiharu [Advanced Technology Planning, Sumitomo Eaton Nova Corporation, SBS Tower 9F, 10-1, Yoga 4-chome, Setagaya-ku, 158-0097 Tokyo (Japan)]. E-mail: suzuki_tsh@senova.co.jp

    2005-08-01

    Features of the flat panel displays (FPDs) such as liquid crystal display (LCD) and organic light emitting diode (OLED) display, etc. using low temperature poly-Si (LTPS) thin film transistors (TFTs) are briefly reviewed comparing with other FPDs. The requirements for fabricating TFTs used for high performance FPDs and system on glass (SoG) are addressed. This paper focuses on the impurity doping technology, which is one of the key technologies together with crystallization by laser annealing, formation of high quality gate insulator and gate-insulator/poly-Si interface. The issues to be solved in impurity doping technology for state of the art and future TFTs are clarified.

  9. Helmet-mounted display technology on the VISTA NF-16D

    Science.gov (United States)

    Underhill, Gregory P.; Bailey, Randall E.; Markman, Steve

    1997-06-01

    Wright Laboratory's Variable-Stability In-Flight Simulator Test Aircraft (VISTA) NF-16D is the newest in-flight simulator in the USAF inventory. A unique research aircraft, it will perform a multitude of missions: to develop and evaluate flight characteristics of new aircraft that have not yet flown, to perform research in the areas of flying qualities, flight control design, pilot-vehicle interface, and weapons and avionics integration, and to train new test pilots. The VISTA upgrade will enhance the simulation fidelity and research capabilities by adding a programmable helmet-mounted display (HMD) and head-up display (HUD) in the front cockpit. The programmable HMD consists of a GEC-Marconi Avionics Viper II Helmet-Mounted Optics Module integrated with a modified Helmet Integrated Systems Limited HGU-86/P helmet, the Honeywell Advanced Metal Tolerant tracker, and a GEC-Marconi Avionics Programmable Display Generator. This system will provide a real-time programmable HUD and a monocular stroke-capable HMD in the front cockpit. The HMD system is designed for growth to stroke-on-video, binocular capability. This paper examines some of the issues associated with current HMD development and explains the value of rapid prototyping or 'quick-look' flight testing on the VISTA NF-16D. A brief overview of the VISTA NF-16D and the hardware and software modifications made to incorporate the programmable display system is given, as well as a review of several key decisions made in the programmable display system implementation. The system's capabilities and what they mean to potential users and designers are presented, particularly for pilot-vehicle interface research.

  10. Tone compatibility between HDR displays

    Science.gov (United States)

    Bist, Cambodge; Cozot, Rémi; Madec, Gérard; Ducloux, Xavier

    2016-09-01

    High Dynamic Range (HDR) is the latest trend in television technology and we expect an influx of HDR-capable consumer TVs in the market. Initial HDR consumer displays will operate at a peak brightness of about 500-1000 nits, while in the coming years display peak brightness is expected to go beyond 1000 nits. However, professionally graded HDR content can range from 1000 to 4000 nits. As with Standard Dynamic Range (SDR) content, we can expect HDR content to be available in a variety of lighting styles, such as low-key, medium-key and high-key video. This raises concerns over tone compatibility between HDR displays, especially when adapting to various lighting styles. It is expected that dynamic range adaptation between HDR displays uses techniques similar to those found with tone mapping and tone expansion operators. In this paper, we survey simple tone mapping methods for displaying 4000-nit color-graded HDR content on 1000-nit HDR displays. We also investigate tone expansion strategies when HDR content graded in 1000 nits is displayed on 4000-nit HDR monitors. We conclude that the best tone reproduction technique between HDR displays strongly depends on the lighting style of the content.
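
One of the simplest operators in the family the paper surveys is a power-law remap of normalized luminance between display peaks. The sketch below is a generic illustration of that idea, not the paper's method; the exponent is an assumed, illustrative choice:

```python
def tone_remap(lum, src_peak, dst_peak, gamma=1.5):
    """Remap absolute luminance (nits) from one display peak to another.

    A simple power-law operator on normalized luminance: with gamma > 1,
    highlights are compressed when going to a dimmer display (tone mapping)
    and stretched when going to a brighter one (tone expansion). The
    exponent is an illustrative assumption, not a value from the paper.
    """
    normalized = min(lum, src_peak) / src_peak
    if dst_peak < src_peak:
        return dst_peak * normalized ** gamma          # e.g. 4000 -> 1000 nits
    return dst_peak * normalized ** (1 / gamma)        # e.g. 1000 -> 4000 nits

# A mid-highlight from a 4000-nit grade shown on a 1000-nit panel:
remapped = tone_remap(2000.0, 4000.0, 1000.0)
```

The paper's point that the best operator depends on the content's lighting style corresponds here to choosing the exponent (or a different curve entirely) per low-key, medium-key, or high-key material.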

  11. Peculiarities of vernier monocular and binocular visual acuity in the retinal orthogonal meridians in patients with hypermetropic astigmatism

    Directory of Open Access Journals (Sweden)

    Владимир Александрович Коломиец

    2015-06-01

    Full Text Available An examination of meridional vernier visual acuity was carried out in 100 patients 7-25 years old with simple and compound hypermetropic astigmatism and refractive amblyopia. The astigmatic component of refraction was in the range 0.5-2.5 dptr. Visual acuity on the sighting eyes after correction was 0.9-1.0, and on eyes with amblyopia 0.4-0.85 relative units. Methods. Visual acuity was determined with the Landolt rings of the Sivtsev table. Vernier visual acuity was determined in seconds of arc from 5 km, using a special computer program. Results. It was demonstrated that in patients with simple hypermetropic astigmatism, the diagnosis of meridional amblyopia can be refined by comparing monocular and binocular vernier visual acuity in the orthogonal meridians of the retinas. Conclusions. The rise of meridional binocular visual acuity in one meridian and its absence in the other allows selective meridional disturbances of visual acuity to be identified.

  12. Rapid recovery from the effects of early monocular deprivation is enabled by temporary inactivation of the retinas

    Science.gov (United States)

    Fong, Ming-fai; Mitchell, Donald E.; Duffy, Kevin R.; Bear, Mark F.

    2016-01-01

    A half-century of research on the consequences of monocular deprivation (MD) in animals has revealed a great deal about the pathophysiology of amblyopia. MD initiates synaptic changes in the visual cortex that reduce acuity and binocular vision by causing neurons to lose responsiveness to the deprived eye. However, much less is known about how deprivation-induced synaptic modifications can be reversed to restore normal visual function. One theoretically motivated hypothesis is that a period of inactivity can reduce the threshold for synaptic potentiation such that subsequent visual experience promotes synaptic strengthening and increased responsiveness in the visual cortex. Here we have reduced this idea to practice in two species. In young mice, we show that the otherwise stable loss of cortical responsiveness caused by MD is reversed when binocular visual experience follows temporary anesthetic inactivation of the retinas. In 3-mo-old kittens, we show that a severe impairment of visual acuity is also fully reversed by binocular experience following treatment and, further, that prolonged retinal inactivation alone can erase anatomical consequences of MD. We conclude that temporary retinal inactivation represents a highly efficacious means to promote recovery of function. PMID:27856748

  13. A 3D Human Skeletonization Algorithm for a Single Monocular Camera Based on Spatial–Temporal Discrete Shadow Integration

    Directory of Open Access Journals (Sweden)

    Jie Hou

    2017-07-01

    Full Text Available Three-dimensional (3D) human skeleton extraction is a powerful tool for activity acquisition and analysis, spawning a variety of applications in somatosensory control, virtual reality and many prospering fields. However, 3D human skeletonization relies heavily on RGB-Depth (RGB-D) cameras, expensive wearable sensors and specific lighting conditions, greatly limiting its outdoor applications. This paper presents a novel 3D human skeleton extraction method designed for monocular-camera, large-scale outdoor scenarios. The proposed algorithm aggregates spatial-temporal discrete joint positions extracted from the human shadow on the ground. Firstly, the projected silhouette information is recovered from the human shadow on the ground for each frame, followed by the extraction of two-dimensional (2D) joint projected positions. The extracted 2D joint positions are then categorized into different sets according to activity silhouette categories. Finally, spatial-temporal integration of same-category 2D joint positions is carried out to generate 3D human skeletons. The proposed method proves accurate and efficient in outdoor human skeletonization applications, based on several comparisons with the traditional RGB-D method. Finally, the application of the proposed method to RGB-D skeletonization enhancement is discussed.

  14. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn the prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage framework to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which pose challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.
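    As a rough illustration of the inference machinery, the sketch below runs an annealed Gibbs sampler over a toy discrete state space. It is not the paper's model, just a generic stand-in in which `log_post` scores a full joint assignment and a decreasing temperature schedule sharpens the conditionals over time (all names are hypothetical):

    ```python
    import math
    import random

    def annealed_gibbs(log_post, states, n_vars, temps, steps_per_temp=20, seed=1):
        """Generic annealed Gibbs sampler over a discrete state space.
        Each variable is resampled from its conditional distribution, with
        log-posteriors divided by a decreasing temperature T so the chain
        first explores broadly, then concentrates on the mode."""
        rng = random.Random(seed)
        x = [rng.choice(states) for _ in range(n_vars)]
        best, best_score = list(x), log_post(x)
        for T in temps:  # annealing schedule, high -> low
            for _ in range(steps_per_temp):
                for i in range(n_vars):
                    # score every candidate value for variable i
                    scores = []
                    for s in states:
                        x[i] = s
                        scores.append(log_post(x) / T)
                    # sample from the tempered conditional
                    m = max(scores)
                    weights = [math.exp(sc - m) for sc in scores]
                    r = rng.random() * sum(weights)
                    acc = 0.0
                    for s, w in zip(states, weights):
                        acc += w
                        if r <= acc:
                            x[i] = s
                            break
                    score = log_post(x)
                    if score > best_score:
                        best, best_score = list(x), score
        return best

    # Toy posterior peaked at the all-ones assignment:
    peak = annealed_gibbs(lambda x: -sum((v - 1) ** 2 for v in x),
                          states=[0, 1, 2], n_vars=3, temps=[2.0, 1.0, 0.25])
    print(peak)  # -> [1, 1, 1]
    ```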

  15. Ten inch Planar Optic Display

    Energy Technology Data Exchange (ETDEWEB)

    Beiser, L. [Beiser (Leo) Inc., Flushing, NY (United States); Veligdan, J. [Brookhaven National Lab., Upton, NY (United States)

    1996-04-01

    A Planar Optic Display (POD) is being built and tested for suitability as a high brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance, leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss image formation, image projection, and optical design constraints.

  16. ENERGY STAR Certified Displays - Deprecated

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset is up-to-date, but newer and better data can be retrieved at: https://data.energystar.gov/dataset/ENERGY-STAR-Certified-Displays/xsyb-v8gs Certified models...

  17. Ultraminiature, Micropower Multipurpose Display Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High information content electronic displays remain the most difficult element of the human-machine interface to effectively miniaturize. Mobile applications need a...

  18. Effective color design for displays

    Science.gov (United States)

    MacDonald, Lindsay W.

    2002-06-01

    Visual communication is a key aspect of human-computer interaction, which contributes to the satisfaction of user and application needs. For effective design of presentations on computer displays, color should be used in conjunction with the other visual variables. The general needs of graphic user interfaces are discussed, followed by five specific tasks with differing criteria for display color specification - advertising, text, information, visualization and imaging.

  19. Performance studies of electrochromic displays

    Science.gov (United States)

    Ionescu, Ciprian; Dobre, Robert Alexandru

    2015-02-01

    The idea of having flexible, very thin, light, low-power and even low-cost display devices implemented using new materials and technologies is very exciting. Nowadays we can talk about more than just concepts: such devices exist, and they are part of an emerging field, FOLAE (Flexible Organic and Large Area Electronics). Among the advantages of electrochromic devices are low power consumption (they are non-emissive, i.e. passive) and an ink-on-paper appearance with a good viewing angle. Some studies are still necessary before adequate performance is achieved and the functional behavior can be predicted. This paper presents the results of research conducted to develop an electrical characterization platform for organic electronic display devices, especially electrochromic displays, to permit a thorough study. The hardware part of the platform permits the measurement of different electrical and optical parameters. The charging/discharging behavior of a display element is of high interest for optimal driving circuitry, and the corresponding waveforms are presented. The contrast of the display is also measured for different operating conditions, such as driving voltage levels and duration. The effect of temperature on the electrical and optical parameters (contrast) of the display is also presented.

  20. Evaluation of anti-glare applications for a tactical helmet-mounted display

    Science.gov (United States)

    Roll, Jason L.; Trew, Noel J. M.; Geis, Matthew R.; Havig, Paul R.

    2011-06-01

    Non see-through, monocular helmet mounted displays (HMDs) provide warfighters with unprecedented amounts of information at a glance. The US Air Force recognizes their usefulness, and has included such an HMD as part of a kit for ground-based, Battlefield Airmen. Despite their many advantages, non see-through HMDs occlude a large portion of the visual field when worn as designed, directly in front of the eye. To address this limitation, operators have chosen to wear it just above the cheek, angled up toward the eye. However, wearing the HMD in this position exposes the display to glare, causing a potential viewing problem. In order to address this problem, we tested several film and HMD hood applications for their effect on glare. The first experiment objectively examined the amount of light reflected off the display with each application in a controlled environment. The second experiment used human participants to subjectively evaluate display readability/legibility with each film and HMD hood covering under normal office lighting and under a simulated sunlight condition. In this test paradigm, participants had to correctly identify different icons on a map and different words on a white background. Our results indicate that though some applications do reduce glare, they do not significantly improve the HMD's readability/legibility compared with an uncovered screen. This suggests that these post-production modifications will not completely solve this problem and underscores the importance of employing a user-centered approach early in the design cycle to determine an operator's use-case before manufacturing an HMD for a particular user community.

  1. The VLT Real Time Display

    Science.gov (United States)

    Herlin, T.; Brighton, A.; Biereichel, P.

    The VLT Real-Time Display (RTD) software was developed to support image display in real time, providing a tool for users to display video-like images from a camera or detector as fast as possible on an X server. The RTD software is implemented as a package providing a Tcl/Tk image widget written in C++ and an independent image handling library, and can be used as a building block, adding display capabilities to dedicated VLT control applications. The RTD widget provides basic image display functionality: panning, zooming, color scaling, colormaps, intensity changes, pixel query, and overlaying of line graphics. A large set of assisting widgets, e.g., colorbar, zoom window, and spectrum plot, is provided to enable the building of image applications. Real-time support is provided by an RTD image event mechanism with which camera or detector subsystems pass images to the RTD widget; image data are passed efficiently via shared memory. This paper describes the architecture of the RTD software and summarizes the features provided by RTD.
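    The shared-memory hand-off that the RTD image event mechanism relies on can be mimicked in a few lines. This is a generic illustration using Python's `multiprocessing.shared_memory` module, not the actual VLT C++ implementation; the frame contents are made up:

    ```python
    from multiprocessing import shared_memory

    # "Camera" side: publish a frame into a shared-memory segment.
    frame = bytes(range(256)) * 4  # hypothetical 1 KiB image buffer
    shm = shared_memory.SharedMemory(create=True, size=len(frame))
    shm.buf[:len(frame)] = frame

    # "Display widget" side: attach to the same segment by name and read
    # the pixels out without any copy through a socket or pipe.
    view = shared_memory.SharedMemory(name=shm.name)
    pixels = bytes(view.buf[:len(frame)])
    assert pixels == frame

    view.close()
    shm.close()
    shm.unlink()
    ```

    In the real system only the segment's name (plus image geometry) travels through the event mechanism, which is what keeps the per-frame cost low.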

  2. Phosphors for flat panel emissive displays

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, M.T.; Walko, R.J.; Phillips, M.L.F.

    1995-07-01

    An overview of emissive display technologies is presented. Display types briefly described include: cathode ray tubes (CRTs), field emission displays (FEDs), electroluminescent displays (ELDs), and plasma display panels (PDPs). The critical role of phosphors in the further development of the latter three flat panel emissive display technologies is outlined. The need for stable, efficient red, green, and blue phosphors for RGB full color displays is emphasized.

  3. BES Monitoring & Displaying System

    Institute of Scientific and Technical Information of China (English)

    Meng WANG; Bingyun ZHANG; et al.

    2001-01-01

    The BES Monitoring & Displaying System (BESMDS) is designed to monitor and display the running status of the DAQ and Slow Control systems of BES through the Web, for worldwide access. It provides a real-time remote means of monitoring as well as an approach to study the environmental influence upon physical data taking. The system collects real-time data separately from the BES online subsystems via network sockets and stores the data in a database. People can access the system through its web site, which retrieves data on request from the database and displays the results as dynamically created images. Its web address is http://besmds.ihep.ac.cn/

  4. Engineering antibodies by yeast display.

    Science.gov (United States)

    Boder, Eric T; Raeeszadeh-Sarmazdeh, Maryam; Price, J Vincent

    2012-10-15

    Since its first application to antibody engineering 15 years ago, yeast display technology has been developed into a highly potent tool for both affinity maturing lead molecules and isolating novel antibodies and antibody-like species. Robust approaches to the creation of diversity, construction of yeast libraries, and library screening or selection have been elaborated, improving the quality of engineered molecules and certainty of success in an antibody engineering campaign and positioning yeast display as one of the premier antibody engineering technologies currently in use. Here, we summarize the history of antibody engineering by yeast surface display, approaches used in its application, and a number of examples highlighting the utility of this method for antibody engineering.

  5. PROGRAMMABLE DISPLAY PUSHBUTTON LEGEND EDITOR

    Science.gov (United States)

    Busquets, A. M.

    1994-01-01

    The Programmable Display Pushbutton (PDP) is a pushbutton device available from Micro Switch which has a programmable 16 x 35 matrix of LEDs on the pushbutton surface. Any desired legends can be displayed on the PDPs, producing user-friendly applications which greatly reduce the need for dedicated manual controls. Because the PDP can interact with the operator, it can call for the correct response before transmitting its next message. It is both a simple manual control and a sophisticated programmable link between the operator and the host system. The Programmable Display Pushbutton Legend Editor, PDPE, is used to create the LED displays for the pushbuttons. PDPE encodes PDP control commands and legend data into message byte strings sent to a Logic Refresh and Control Unit (LRCU). The LRCU serves as the driver for a set of four PDPs. The legend editor (PDPE) transmits to the LRCU user specified commands that control what is displayed on the LED face of the individual pushbuttons. Upon receiving a command, the LRCU transmits an acknowledgement that the message was received and executed successfully. The user then observes the effect of the command on the PDP displays and decides whether or not to send the byte code of the message to a data file so that it may be called by an applications program. The PDPE program is written in FORTRAN for interactive execution. It was developed on a DEC VAX 11/780 under VMS. It has a central memory requirement of approximately 12800 bytes. It requires four Micro Switch PDPs and two RS-232 VAX 11/780 terminal ports. The PDPE program was developed in 1985.

  6. Analysis of an autostereoscopic display: the perceptual range of the three-dimensional visual fields and saliency of static depth cues

    Science.gov (United States)

    Havig, Paul; McIntire, John; McGruder, Rhoshonda

    2006-02-01

    Autostereoscopic displays offer users the unique ability to view 3-dimensional (3D) imagery without special eyewear or headgear. However, the user's head must be within limited "eye boxes" or "viewing zones". Little research has evaluated these viewing zones from a human-in-the-loop, subjective perspective. In the first study, twelve participants evaluated the quality and amount of perceived 3D in images. We manipulated distance from observer, viewing angle, and stimuli to characterize the perceptual viewing zones. The data were correlated with objective measures to investigate the degree of concurrence between the objective and subjective measures. In a second study we investigated the benefit of generating stimuli that take advantage of monocular depth cues. The purpose of this study was to determine if one could develop optimal stimuli that would give rise to the greatest 3D effect at off-axis viewing angles. Twelve participants evaluated the quality of depth perception for various stimuli, each made up of one monocular depth cue (i.e., linear perspective, occlusion, haze, size, texture, and horizon). Viewing zone analysis is discussed in terms of optimal viewing distances and viewing angles. Stimulus properties are discussed in terms of image complexity and depth cues present.

  7. Computational multi-projection display.

    Science.gov (United States)

    Moon, Seokil; Park, Soon-Gi; Lee, Chang-Kun; Cho, Jaebum; Lee, Seungjae; Lee, Byoungho

    2016-04-18

    A computational multi-projection display is proposed by combining a multi-projection system with compressive light field displays. By modulating the intensity of light rays from a spatial light modulator inside a single projector, the proposed system can offer several compact views to the observer. Since the light rays spread in all directions, the system can provide flexible positioning of viewpoints without stacking projectors in the vertical direction. Also, if the system is constructed properly, it is possible to generate view images with an inter-pupillary gap and satisfy the super multi-view condition. We explain the principle of the proposed system and verify its feasibility with simulations and experimental results.

  8. Analysis of the macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure macular ganglion cell complex thickness in monocular strabismic amblyopia patients, in order to explore the relationship between the degree of amblyopia and retinal ganglion cell complex thickness, and to determine whether the macular ganglion cell structure is abnormal in strabismic amblyopia. METHODS: Using a Fourier-domain optical coherence tomography (FD-OCT) instrument, iVue® (Optovue Inc, Fremont, CA), macular ganglion cell complex (mGCC) thickness was measured in 26 patients (52 eyes) included in this study, and its correlation with best-corrected visual acuity was analyzed. RESULTS: The mean thickness of the mGCC was investigated in three regions: central, inner circle (3mm) and outer circle (6mm). The mean mGCC thicknesses in the central, inner and outer circles were 50.74±21.51μm, 101.4±8.51μm and 114.2±9.455μm in the strabismic amblyopia eyes (SAE), and 43.79±11.92μm, 92.47±25.01μm and 113.3±12.88μm in the contralateral sound eyes (CSE), respectively. There was no statistically significant difference between the eyes (P>0.05). However, best-corrected visual acuity correlated well with mGCC thickness, the correlation being stronger for the lower part than for the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness. Although there was no statistically significant difference in mGCC thickness between the SAE and CSE, measuring the central macular mGCC thickness in clinic may help assess the degree of amblyopia.

  9. Layer- and cell-type-specific subthreshold and suprathreshold effects of long-term monocular deprivation in rat visual cortex.

    Science.gov (United States)

    Medini, Paolo

    2011-11-23

    Connectivity and dendritic properties are determinants of plasticity that are layer and cell-type specific in the neocortex. However, the impact of experience-dependent plasticity at the level of synaptic inputs and spike outputs remains unclear along vertical cortical microcircuits. Here I compared subthreshold and suprathreshold sensitivity to prolonged monocular deprivation (MD) in rat binocular visual cortex in layer 4 and layer 2/3 pyramids (4Ps and 2/3Ps) and in thick-tufted and nontufted layer 5 pyramids (5TPs and 5NPs), which innervate different extracortical targets. In normal rats, 5TPs and 2/3Ps are the most binocular in terms of synaptic inputs, and 5NPs are the least. Spike responses of all 5TPs were highly binocular, whereas those of 2/3Ps were dominated by either the contralateral or ipsilateral eye. MD dramatically shifted the ocular preference of 2/3Ps and 4Ps, mostly by depressing deprived-eye inputs. Plasticity was profoundly different in layer 5. The subthreshold ocular preference shift was sevenfold smaller in 5TPs because of smaller depression of deprived inputs combined with a generalized loss of responsiveness, and was undetectable in 5NPs. Despite their modest ocular dominance change, spike responses of 5TPs consistently lost their typically high binocularity during MD. The comparison of MD effects on 2/3Ps and 5TPs, the main affected output cells of vertical microcircuits, indicated that subthreshold plasticity is not uniquely determined by the initial degree of input binocularity. The data raise the question of whether 5TPs are driven solely by 2/3Ps during MD. The different suprathreshold plasticity of the two cell populations could underlie distinct functional deficits in amblyopia.

  10. Monocular denervation of visual nuclei modulates APP processing and sAPPα production: A possible role on neural plasticity.

    Science.gov (United States)

    Vasques, Juliana Ferreira; Heringer, Pedro Vinícius Bastos; Gonçalves, Renata Guedes de Jesus; Campello-Costa, Paula; Serfaty, Claudio Alberto; Faria-Melibeu, Adriana da Cunha

    2017-08-01

    Amyloid precursor protein (APP) is essential to physiological processes such as synapse formation and neural plasticity. Sequential proteolysis of APP by beta- and gamma-secretases generates amyloid-beta peptide (Aβ), the main component of senile plaques in Alzheimer Disease. Alternative APP cleavage by alpha-secretase occurs within Aβ domain, releasing soluble α-APP (sAPPα), a neurotrophic fragment. Among other functions, sAPPα is important to synaptogenesis, neural survival and axonal growth. APP and sAPPα levels are increased in models of neuroplasticity, which suggests an important role for APP and its metabolites, especially sAPPα, in the rearranging brain. In this work we analyzed the effects of monocular enucleation (ME), a classical model of lesion-induced plasticity, upon APP content, processing and also in secretases levels. Besides, we addressed whether α-secretase activity is crucial for retinotectal remodeling after ME. Our results showed that ME induced a transient reduction in total APP content. We also detected an increase in α-secretase expression and in sAPP production concomitant with a reduction in Aβ and β-secretase contents. These data suggest that ME facilitates APP processing by the non-amyloidogenic pathway, increasing sAPPα levels. Indeed, the pharmacological inhibition of α-secretase activity reduced the axonal sprouting of ipsilateral retinocollicular projections from the intact eye after ME, suggesting that sAPPα is necessary for synaptic structural rearrangement. Understanding how APP processing is regulated under lesion conditions may provide new insights into APP physiological role on neural plasticity. Copyright © 2017 ISDN. Published by Elsevier Ltd. All rights reserved.

  11. Display standards for commercial flight decks

    Science.gov (United States)

    Lamberth, Larry S.; Penn, Cecil W.

    1994-06-01

    SAE display standards are used as guidelines for certifying commercial airborne electronic displays. The SAE document generation structure and approval process is described. The SAE committees that generate display standards are described. Three SAE documents covering flat panel displays (AS-8034, ARP-4256, and ARP-4260) are discussed with their current status. Head-Up Display documents are also in work.

  12. Display Apple M7649Zm

    CERN Multimedia

    2001-01-01

    It was designed for the Power Mac G4. This Apple Studio Display gives you edge-to-edge distortion-free images. With more than 16.7 million colors and 1,280 x 1,024 resolution, you view brilliant and bright images on this Apple 17-inch monitor.

  13. Book Display as Adult Service.

    Science.gov (United States)

    Moore, Matthew S.

    1997-01-01

    Defines book display as an adult service as choosing and positioning adult books from the library collection to increase their circulation. The author contrasts bookstore arrangement for sales versus library arrangement for access, including contrasting missions, genre grouping, weeding, problems, and dimensions. (Author/LRW)

  14. Real Time Sonic Boom Display

    Science.gov (United States)

    Haering, Ed

    2014-01-01

    This presentation will provide general information about sonic boom mitigation technology to the public in order to supply information to potential partners and licensees. The technology is a combination of flight data, atmospheric data and terrain information implemented into a control room real time display for flight planning. This research is currently being performed and as such, any results and conclusions are ongoing.

  15. Graphics Display of Foreign Scripts.

    Science.gov (United States)

    Abercrombie, John R.

    1987-01-01

    Describes Graphics Project for Foreign Language Learning at the University of Pennsylvania, which has developed ways of displaying foreign scripts on microcomputers. Character design on computer screens is explained; software for graphics, printing, and language instruction is discussed; and a text editor is described that corrects optically…

  16. Verbal Modification via Visual Display

    Science.gov (United States)

    Richmond, Edmun B.; Wallace-Childers, La Donna

    1977-01-01

    The inability of foreign language students to produce acceptable approximations of new vowel sounds initiated a study to devise a real-time visual display system whereby the students could match vowel production to a visual pedagogical model. The system used amateur radio equipment and a standard oscilloscope. (CHK)

  17. Colour displays for categorical images

    NARCIS (Netherlands)

    Glasbey, C.; Heijden, van der G.W.A.M.; Toh, V.F.K.; Gray, A.J.

    2007-01-01

    We propose a method for identifying a set of colours for displaying 2D and 3D categorical images when the categories are unordered labels. The principle is to find maximally distinct sets of colours. We either generate colours sequentially, to maximize the dissimilarity or distance between a new col
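    A greedy farthest-point selection captures the sequential strategy described here. The sketch below is an illustrative reconstruction, not the authors' published algorithm: it works in plain RGB with squared Euclidean distance over a small candidate grid, whereas a perceptually uniform colour space would be a better fit in practice:

    ```python
    def distinct_colours(n):
        """Greedily pick n colours so that each new colour maximizes its
        minimum squared RGB distance to the colours already chosen."""
        levels = (0, 128, 255)
        candidates = [(r, g, b) for r in levels for g in levels for b in levels]
        chosen = [(0, 0, 0)]  # seed with black
        while len(chosen) < n:
            def min_dist(c):
                return min(sum((a - b) ** 2 for a, b in zip(c, p)) for p in chosen)
            chosen.append(max(candidates, key=min_dist))
        return chosen

    palette = distinct_colours(4)
    print(palette[0], palette[1])  # (0, 0, 0) (255, 255, 255)
    ```

    Seeded with black, the first colour added is necessarily white (the farthest grid point), and each subsequent pick stays as far as possible from everything already in the palette.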

  18. Autostereoscopic display with eye tracking

    Science.gov (United States)

    Tomono, Takao; Hoon, Kyung; Ha, Yong Soo; Kim, Sung-Sik; Son, Jung-Young

    2002-05-01

    A 21-inch auto-stereoscopic display with eye tracking, having a wide viewing zone and a bright image, was fabricated. The display image is projected onto the retina through several optical components. We designed the optical system for a wider viewing zone using the inverse ray-trace method. The viewing zone of the first model is 155mm (theoretical value: 161mm). We could widen the viewing zone by controlling the paraxial radius of curvature of the spherical mirror, the distance between lenses, and so on; the viewing zone of the second model is 208mm. We used two spherical mirrors to obtain twice the brightness. We applied an eye-tracking system to the display, with eye recognition based on a neural network card using ZICS technology. We measured the viewing zone based on the illumination area; it was 206mm, close to the theoretical value, and twice the brightness was also obtained. Viewers can see 3D images from any tracked position without headgear.

  19. Crystal ball single event display

    Energy Technology Data Exchange (ETDEWEB)

    Grosnick, D.; Gibson, A. [Valparaiso Univ., IN (United States). Dept. of Physics and Astronomy; Allgower, C. [Argonne National Lab., IL (United States). High Energy Physics Div.; Alyea, J. [Valparaiso Univ., IN (United States). Dept. of Physics and Astronomy]|[Argonne National Lab., IL (United States). High Energy Physics Div.

    1997-10-15

    The Single Event Display (SED) is a routine that is designed to provide information graphically about a triggered event within the Crystal Ball. The SED is written entirely in FORTRAN and uses the CERN-based HIGZ graphing package. The primary display shows the amount of energy deposited in each of the NaI crystals on a Mercator-like projection of the crystals. Ten different shades and colors correspond to varying amounts of energy deposited within a crystal. Information about energy clusters is displayed on the crystal map by outlining in red the thirteen (or twelve) crystals contained within a cluster and assigning each cluster a number. Additional information about energy clusters is provided in a series of boxes containing useful data about the energy distribution among the crystals within the cluster. Other information shown on the event display includes the event trigger type and data about π0's and η's formed from pairs of clusters as found by the analyzer. A description of the major features is given, along with some information on how to install the SED into the analyzer.

  20. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  1. Information retrieval and display system

    Science.gov (United States)

    Groover, J. L.; King, W. L.

    1977-01-01

    Versatile command-driven data management system offers users, through simplified command language, a means of storing and searching data files, sorting data files into specified orders, performing simple or complex computations, effecting file updates, and printing or displaying output data. Commands are simple to use and flexible enough to meet most data management requirements.

  2. Display Sharing: An Alternative Paradigm

    Science.gov (United States)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers, must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration, and generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time-consuming for an end user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, and host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.

  3. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    Science.gov (United States)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to the use of 2D displays (which do not provide binocular-parallax cues for depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video and then in stereoscopic 3D video. The percentages of correct answers were: 2D pretest, 52%; 3D pretest, 80%; 2D posttest, 48%; 3D posttest, 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operations, both on foot and in combat vehicles.

  4. Visuotactile Integration for Depth Perception in Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Hürst, W.O.; Werkhoven, P.J.; Veltkamp, R.C.

    Augmented reality applications using stereo head-mounted displays are not capable of perfectly blending real and virtual objects. For example, depth in the real world is perceived through cues such as accommodation and vergence. However, in stereo head-mounted displays these cues are disconnected

  5. Assessing Binocular Advantage in Aided Vision

    Science.gov (United States)

    2014-06-01

    Night/day Imaging Technologies (ANIT), Head Mounted Display, Night Vision Goggle, Aided Vision, Binocular... advantage of binocularity in aided vision. Key Words: Head Mounted Display, Night Vision Goggle, Stereopsis, Modulation Transfer Function... veridical perception allows the operator to navigate and manipulate the environment in a natural and efficient manner. Night vision goggles (NVGs),

  6. Displays for future intermediate UAV

    Science.gov (United States)

    Desjardins, Daniel; Metzler, James; Blakesley, David; Rister, Courtney; Nuhu, Abdul-Razak

    2008-04-01

    The Dedicated Autonomous Extended Duration Airborne Long-range Utility System (DAEDALUS) is a prototype Unmanned Aerial Vehicle (UAV) that won the 2007 AFRL Commander's Challenge. The purpose of the Commander's Challenge was to find an innovative solution to urgent warfighter needs by designing a UAV with increased persistence for tactical employment of sensors and communication systems. DAEDALUS was chosen as a winning prototype by AFRL, AFMC and SECAF. Follow-on units are intended to fill an intermediate role between currently fielded Tier I and Tier II UAVs. The UAV design discussed in this paper, including sensors and displays, will enter Phase II for Rapid Prototype Development with the intent of developing the design for eventual production. This paper will discuss the DAEDALUS UAV prototype system, with particular focus on its communications, including the infrared sensor and electro-optical camera, and on its displays, specifically man-portable displays.

  7. Characterization of the rotating display.

    Science.gov (United States)

    Keyes, J W; Fahey, F H; Harkness, B A; Eggli, D F; Balseiro, J; Ziessman, H A

    1988-09-01

    The rotating display is a useful method for reviewing single photon emission computed tomography (SPECT) data. This study evaluated the requirements for a subjectively pleasing and useful implementation of this technique. Twelve SPECT data sets were modified and viewed by several observers who recorded the minimum framing rates for apparent smooth rotation, 3D effect, effects of image size, and other parameters. The results showed that a minimum of 16 frames was needed for a useful display. Smaller image sizes and more frames were preferred. The recommended minimal framing rate for a 64-frame study is 16-17 frames per second and for a 32-frame study, 12-13 frames per second. Other enhancements also were useful.

  8. Interactive display of polygonal data

    Energy Technology Data Exchange (ETDEWEB)

    Wood, P.M.

    1977-10-01

    Interactive computer graphics is an excellent approach to many types of applications. It is an effective method for geographic analysis when one wishes to rapidly examine existing geographically related data or to display specially prepared data and base maps for publication. One such program is the interactive thematic mapping system called CARTE, which combines polygonal base maps with statistical data to produce shaded maps using a variety of shading symbolisms on a variety of output devices. A polygonal base map is one in which geographic entities are described by points, lines, or polygons. It is combined with geocoded data to produce special-subject, or thematic, maps. Shading symbolisms include texture shading for areas, varying widths for lines, and scaled symbols for points. Output devices include refresh and storage CRTs and auxiliary Calcomp or COM hardcopy. The system is designed to aid in the quick display of spatial data and in detailed map design.

  9. Game engines and immersive displays

    Science.gov (United States)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
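The off-axis stereo the paper alludes to is usually implemented with asymmetric ("parallel axis") frusta rather than toed-in cameras, so that zero parallax falls on a chosen convergence plane without vertical disparity. A minimal sketch of that computation, assuming glFrustum-style bounds; the function name and parameters are illustrative, not from the paper:

```python
import math

def stereo_frusta(fov_y_deg, aspect, near, far, eye_sep, convergence):
    """Asymmetric-frustum stereo camera parameters.

    Returns (left_eye, right_eye) tuples of glFrustum-style bounds
    (l, r, b, t, n, f) so that the two views converge at the
    `convergence` distance without toe-in distortion.
    """
    top = near * math.tan(math.radians(fov_y_deg) / 2)
    bottom = -top
    half_w = top * aspect
    # Horizontal frustum shift that places zero parallax at the
    # convergence plane (similar triangles: near / convergence).
    shift = (eye_sep / 2) * near / convergence
    left_eye = (-half_w + shift, half_w + shift, bottom, top, near, far)
    right_eye = (-half_w - shift, half_w - shift, bottom, top, near, far)
    return left_eye, right_eye
```

In an engine such as Unity3D the same shift would be applied through a custom projection matrix per eye; clustering then amounts to computing one such frustum per display wall.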

  10. Proof nets for display logic

    CERN Document Server

    Moot, Richard

    2007-01-01

    This paper explores several extensions of proof nets for the Lambek calculus in order to handle the different connectives of display logic in a natural way. The new proof net calculus handles some recent additions to the Lambek vocabulary such as Galois connections and Grishin interactions. It concludes with an exploration of the generative capacity of the Lambek-Grishin calculus, presenting an embedding of lexicalized tree adjoining grammars into the Lambek-Grishin calculus.

  11. Modern Display Technologies and Applications

    Science.gov (United States)

    1982-01-01

    conventional tubes, LSI circuitry offers the possibility of correcting some of the deficiencies in electron-optic performance and may lead to acceptable...certain ceramic materials such as PLZT (lead lanthanum zirconate titanate) can be utilized for display applications. PLZT is transparent in the visible...consuming power (3.8.12). 3.8.4.2 State of development. Magnetic particles have been made of polyethylene with powdered strontium ferrite as a filler

  12. Striations in Plasma Display Panel

    Institute of Scientific and Technical Information of China (English)

    OUYANG Ji-Ting; CAO Jing; MIAO Jin-Song

    2005-01-01

    The phenomenon of striation has been investigated experimentally in a macroscopic ac plasma display panel (PDP). The relationship between the characteristics of striation and the operating conditions, including voltage, frequency, rib and electrode configuration, etc., is obtained experimentally. The origin of the striations is considered to be ionization waves in the transient positive column near the dielectric surface in the anode area during the discharge, with the perturbation caused by resonance kinetic effects in the inert gas.

  13. Multiview synthesis for autostereoscopic displays

    Science.gov (United States)

    Dane, Gökçe.; Bhaskaran, Vasudev

    2013-09-01

    Autostereoscopic (AS) displays spatially multiplex multiple views, providing a more immersive experience by enabling users to view the content from different angles without the need for 3D glasses. Multiple views could be captured from multiple cameras at different orientations; however, this can be expensive, time consuming, and not applicable to some applications. The goal of the multiview synthesis in this paper is to generate multiple views from a stereo image pair and a disparity map by using various video processing techniques, including depth/disparity map processing, initial view interpolation, inpainting, and post-processing. We specifically emphasize the need for disparity processing when no depth information associated with the 2D data is available, and we propose a segmentation-based disparity processing algorithm to improve the disparity map. Furthermore, we extend a texture-based 2D inpainting algorithm to 3D and further improve the hole-filling performance of view synthesis. The benefit of each step of the proposed algorithm is demonstrated by comparison to state-of-the-art algorithms in terms of visual quality and the PSNR metric. Our system is evaluated in an end-to-end multiview synthesis framework where only a stereo image pair is provided as input and eight views are output and displayed on an 8-view Alioscopy AS display.
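The initial view-interpolation step described above can be illustrated by forward-warping one image of the stereo pair by a fraction of its disparity and then filling the resulting holes. This is a simplified sketch of the general technique, not the paper's algorithm (which uses segmentation-based disparity processing and texture-based inpainting):

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp the left image toward a virtual viewpoint.

    alpha = 0 reproduces the left view, alpha = 1 approximates the
    right view; intermediate values give the in-between views that an
    autostereoscopic display multiplexes.
    """
    h, w = disparity.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Shift each pixel horizontally by a fraction of its disparity.
            xs = int(round(x - alpha * disparity[y, x]))
            if 0 <= xs < w:
                out[y, xs] = left[y, x]
                filled[y, xs] = True
    # Naive hole filling: propagate the nearest filled pixel from the
    # left (real systems use inpainting instead of this).
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x] and filled[y, x - 1]:
                out[y, x] = out[y, x - 1]
    return out
```

An 8-view display would call this for eight values of `alpha`, then spatially interleave the results according to the panel's lenticular layout.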

  14. Monocular perception of depth or relief in the hollow-mask illusion in schizophrenia

    Directory of Open Access Journals (Sweden)

    Arthur Alves

    2014-03-01

    Full Text Available This study investigated monocular perception of the depth or relief of the hollow mask in 29 healthy individuals, seven individuals with schizophrenia on antipsychotic medication for four weeks or less, and 29 on antipsychotic medication for more than four weeks. The three groups judged the reverse side of a polychrome mask under two lighting conditions, illumination from above and from below. The results indicated that most individuals with schizophrenia inverted the depth of the concave mask under monocular viewing and perceived it as convex, and were therefore susceptible to the hollow-mask illusion. Individuals with schizophrenia on antipsychotic medication for more than four weeks estimated the convexity of the concave mask illuminated from above as shorter in length than did healthy individuals.

  15. Gestures to Intuitively Control Large Displays

    NARCIS (Netherlands)

    Fikkert, F.W.; Vet, van der P.E.; Rauwerda, H.; Breit, T.; Nijholt, A.; Sales Dias, M.; Gibet, S.; Wanderley, M.W.; Bastos, R.

    2009-01-01

    Large displays are highly suited to supporting discussions in empirical science. Such displays can present project results on a large digital surface to feed the discussion. This paper describes our approach to closely involving multidisciplinary omics scientists in the design of an intuitive display con

  16. 27 CFR 6.55 - Display service.

    Science.gov (United States)

    2010-04-01

    ... Distribution Service § 6.55 Display service. Industry member reimbursements to retailers for setting up product or other displays constitute paying the retailer for rendering a display service within the meaning...

  17. Review of Defense Display Research Programs

    Science.gov (United States)

    2001-01-01

    Programs: Flat Panel Autostereoscopic N-perspective 3D, High Definition DMD Digital Projector, Light Piping & Quantum Cavity Displays, Solid State Laser...Megapixel Displays • Size Commonality • 67% Weight Reduction • >200 sq. in. per Display 20-20 Vision Simulators True 3D, sparse symbols Foldable Display...megapixel 2D and True 3D Display Technology 25M & T3D FY02-FY06 New service thrusts

  18. Recent Trend in Development of Olfactory Displays

    Science.gov (United States)

    Yanagida, Yasuyuki

    An olfactory display is a device that generates scented air with a desired concentration of aroma and delivers it to the user's olfactory organ. In this article, the nature of olfaction is briefly described from the viewpoint of how to configure olfactory displays. Next, the component technologies composing olfactory displays, i.e., making scents and delivering scents, are categorized. Several existing olfactory display systems are introduced to show the current status of research and development of olfactory displays.

  19. Optical display for radar sensing

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Willey, Jefferson; Landa, Joseph; Hsieh, Minder; Larsen, Louis V.; Krzywicki, Alan T.; Tran, Binh Q.; Hoekstra, Philip; Dillard, John T.; Krapels, Keith A.; Wardlaw, Michael; Chu, Kai-Dee

    2015-05-01

    Boltzmann's headstone equation S = kB Log W turns out to be the Rosetta stone for translating the hieroglyphics of microwave sensing into an optical display. The LHS is the molecular entropy S, measuring the degree of uniformity of scattering off the sensing cross sections. The RHS is the inverse relationship (equation) predicting the Planck radiation spectral distribution parameterized by the Kelvin temperature T. Use is made of the energy conservation law: the heat-capacity change of the reservoir (RV), T Δ S = -ΔE, equals the internal energy change of the black-box (bb) subsystem. Moreover, irreversible thermodynamics, Δ S > 0 for collision mixing toward the larger uniformity of heat death, as asserted by Boltzmann, yields the so-called Maxwell-Boltzmann canonical probability. Given the zero-boundary-condition black box, Planck solved for discrete standing-wave eigenstates (equation). Together with the canonical partition function (equation), an ensemble average over all possible internal energies yielded the celebrated Planck radiation spectrum (equation), where the density of states is (equation). In summary, given the multispectral sensing data (equation), we applied the Lagrange Constraint Neural Network (LCNN) to solve Blind Sources Separation (BSS) for a set of equivalent bb target temperatures. From the measured values, slopes, and shapes we can fit a set of Kelvin temperatures T for the bb targets. As a result, we can apply analytic continuation for each entropy source along the temperature-unique Planck spectral curves toward an RGB color-temperature display for any sensing probe frequency.
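The Planck spectral distribution the abstract leans on can be evaluated directly. A short sketch, assuming CODATA constants and an illustrative function name of our own (not the paper's code), which also recovers Wien's displacement peak numerically:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W sr^-1 m^-3."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1)

# Locating the peak wavelength on a 1 nm grid reproduces Wien's
# displacement law, lambda_max * T ~= 2.898e-3 m K.
wavelengths = [i * 1e-9 for i in range(100, 3001)]
peak = max(wavelengths, key=lambda w: planck_radiance(w, 5800.0))
```

Fitting a measured multispectral slope and shape to curves of this family is what yields the per-target Kelvin temperatures that drive the RGB color-temperature display.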

  20. Simulated monitor display for CCTV

    Energy Technology Data Exchange (ETDEWEB)

    Steele, B.J.

    1982-01-01

    Two computer programs have been developed which generate a two-dimensional graphic perspective of the video output produced by a Closed Circuit Television (CCTV) camera. Both programs were primarily written to produce a graphic display simulating the field of view (FOV) of a perimeter assessment system as seen on a CCTV monitor. The original program was developed for use on a Tektronix 4054 desktop computer; however, the usefulness of this graphic display program led to the development of a similar program for a Hewlett-Packard 9845B desktop computer. After entry of various input parameters, such as camera lens and orientation, the programs automatically calculate and graphically plot the locations of various items, e.g., fences, an assessment zone, running men, and intrusion detection sensors. Numerous special effects can be generated to simulate such things as roads, interior walls, or sides of buildings. Other objects can be digitized and entered into permanent memory, similar to the running men. With this type of simulated monitor perspective, proposed camera locations with respect to fences and a particular assessment zone can be rapidly evaluated without the costly time delays and expenditures associated with field evaluation.
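The core of such a simulation is a pinhole projection of surveyed 3D features into the camera's image plane. A hedged sketch of that projection; the coordinate convention (x east, y north, z up), function name, and parameters are our assumptions, not the original Tektronix or Hewlett-Packard code:

```python
import math

def project_point(pt, cam, pan_deg, tilt_deg, focal_px, img_w, img_h):
    """Project a world point into pixel coordinates of a pan/tilt
    pinhole camera; returns None if the point is behind the camera."""
    dx, dy, dz = (pt[i] - cam[i] for i in range(3))
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    # Rotate into the camera frame: pan about the vertical axis...
    fwd = dx * math.cos(p) + dy * math.sin(p)
    lat = -dx * math.sin(p) + dy * math.cos(p)
    # ...then tilt about the lateral axis.
    fwd2 = fwd * math.cos(t) + dz * math.sin(t)
    up2 = -fwd * math.sin(t) + dz * math.cos(t)
    if fwd2 <= 0:
        return None  # behind the image plane
    u = img_w / 2 + focal_px * lat / fwd2
    v = img_h / 2 - focal_px * up2 / fwd2
    return (u, v)
```

Drawing a fence then reduces to projecting its endpoint coordinates and connecting the resulting pixel positions, which is how a proposed camera placement can be assessed before any field installation.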