WorldWideScience

Sample records for monocular condition display

  1. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in 3D display environments is a significant obstacle to 3D display commercialization. Several 3D display systems, including eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays, are analyzed in detail by geometrical optics to determine how well each satisfies monocular accommodation. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of accommodation satisfaction under the MF display condition. Additionally, the possibility of monocular depth perception (a 3D effect) with the monocular MF display is discussed.
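
    A rough way to see the kind of number such a geometrical-optics analysis produces: defocus in diopters is the difference of the reciprocals of the accommodated and displayed distances, and the angular blur-circle diameter is approximately the pupil diameter times that defocus. The sketch below is only this textbook approximation with made-up example values, not the paper's calculation.

      def defocus_blur(pupil_diameter_m, accommodated_dist_m, object_dist_m):
          """Thin-lens estimate of monocular defocus for an eye accommodated at
          one distance while the displayed image lies at another."""
          defocus_diopters = abs(1.0 / object_dist_m - 1.0 / accommodated_dist_m)
          blur_circle_rad = pupil_diameter_m * defocus_diopters  # angular blur, radians
          return defocus_diopters, blur_circle_rad

      # Example: a 4 mm pupil accommodated at 0.5 m viewing an image plane at 2 m.
      print(defocus_blur(0.004, 0.5, 2.0))  # -> (1.5 D, 0.006 rad, i.e. about 0.34 deg)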

  2. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  3. Light-weight monocular display unit for 3D display using polypyrrole film actuator

    Science.gov (United States)

    Sakamoto, Kunio; Ohmori, Koji

    2010-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To address this problem, we developed a light-weight 3D vision unit that presents monocular stereoscopic images using a polypyrrole linear actuator.

  4. Visibility of monocular symbology in transparent head-mounted display applications

    Science.gov (United States)

    Winterbottom, M.; Patterson, R.; Pierce, B.; Gaska, J.; Hadley, S.

    2015-05-01

    With increased reliance on head-mounted displays (HMDs), such as the Joint Helmet Mounted Cueing System and F-35 Helmet Mounted Display System, research concerning visual performance has also increased in importance. Although monocular HMDs have been used successfully for many years, a number of authors have reported significant problems with their use. Certain problems have been attributed to binocular rivalry when differing imagery is presented to the two eyes. With binocular rivalry, the visibility of the images in the two eyes fluctuates, with one eye's view becoming dominant, and thus visible, while the other eye's view is suppressed; which eye dominates alternates over time. Rivalry is almost certainly created when viewing an occluding monocular HMD. For semi-transparent monocular HMDs, however, much of the scene is binocularly fused, with additional imagery superimposed in one eye. Binocular fusion is thought to prevent rivalry. The present study was designed to investigate differences in visibility between monocularly and binocularly presented symbology at varying levels of contrast and while viewing simulated flight over terrain at various speeds. Visibility was estimated by measuring the presentation time required to identify a test probe (tumbling E) embedded within other static symbology. Results indicated that there were large individual differences, but that performance decreased with decreased test probe contrast under monocular viewing relative to binocular viewing conditions. Rivalry suppression may reduce visibility of semi-transparent monocular HMD imagery. However, factors such as contrast sensitivity, masking, and conditions such as monofixation will be important to examine in future research concerning visibility of HMD imagery.

  5. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To address this problem, we developed a light-weight 3D vision unit that presents monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  6. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed and the possibility of satisfying eye accommodation is tested. Multi-focus refers to the ability to provide monocular depth cues at various depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes that can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and the binocular convergence 3D effect of the systems are tested, and proof of the satisfaction of accommodation and experimental results of binocular 3D fusion are given using the proposed 3D display systems.

  7. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems fall into two types by presentation method: those that use special glasses and monitor systems that require no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen and cannot enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed a method of extending the display area using a mirror. This extension method lets observers view a virtual image plane and doubles the screen area. In the developed display unit, we made use of an image-separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  8. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    Science.gov (United States)

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  9. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    Science.gov (United States)

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction between the binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition, whereas no significant difference was found for the remaining refraction comparison (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029; adjusted R² = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show a larger difference in spherical refraction between these two conditions.

  10. Avoiding monocular artifacts in clinical stereotests presented on column-interleaved digital stereoscopic displays.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Vancleef, Kathleen; Read, Jenny C A

    2016-11-01

    New forms of stereoscopic 3-D technology offer vision scientists new opportunities for research, but also come with distinct problems. Here we consider autostereo displays where the two eyes' images are spatially interleaved in alternating columns of pixels and no glasses or special optics are required. Column-interleaved displays produce an excellent stereoscopic effect, but subtle changes in the angle of view can increase cross talk or even interchange the left and right eyes' images. This creates several challenges to the presentation of cyclopean stereograms (containing structure which is only detectable by binocular vision). We discuss the potential artifacts, including one that is unique to column-interleaved displays, whereby scene elements such as dots in a random-dot stereogram appear wider or narrower depending on the sign of their disparity. We derive an algorithm for creating stimuli which are free from this artifact. We show that this and other artifacts can be avoided by (a) using a task which is robust to disparity-sign inversion (for example, a disparity-detection rather than a discrimination task), (b) using our proposed algorithm to ensure that parallax is applied symmetrically on the column-interleaved display, and (c) using a dynamic stimulus to avoid monocular artifacts from motion parallax. In order to test our recommendations, we performed two experiments using a stereoacuity task implemented with a parallax-barrier tablet. Our results confirm that these recommendations eliminate the artifacts. We believe that these recommendations will be useful to vision scientists interested in running stereo psychophysics experiments using parallax-barrier and other column-interleaved digital displays.
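
    The paper's derivation is not reproduced in this record. Purely as an illustration of what "applying parallax symmetrically" means on a column-interleaved panel, the sketch below shifts half of the disparity into each eye's sub-image before interleaving; the even-columns-left / odd-columns-right mapping and the use of a single base image are assumptions.

      import numpy as np

      def interleave_symmetric(image, disparity_cols):
          """Build a column-interleaved frame, splitting the (even) disparity
          half-and-half between the two eyes instead of shifting one eye only."""
          half = disparity_cols // 2
          left = np.roll(image, half, axis=1)    # left-eye view shifted by +d/2
          right = np.roll(image, -half, axis=1)  # right-eye view shifted by -d/2
          frame = np.empty_like(image)
          frame[:, 0::2] = left[:, 0::2]         # even columns -> left eye (assumed)
          frame[:, 1::2] = right[:, 1::2]        # odd columns  -> right eye (assumed)
          return frame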

  11. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  13. Visual task performance using a monocular see-through head-mounted display (HMD) while walking.

    Science.gov (United States)

    Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka

    2013-12-01

    A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Modeling Of A Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness

    Science.gov (United States)

    2017-03-27

    USAARL Report No. 2017-10: Modeling of a Monocular, Full-Color, Laser-Scanning, Helmet-Mounted Display for Aviator Situational Awareness. Harding, Thomas H.; Raatz, Maria E.; Martin, John S.; Rash, Clarence E. U.S. Army Aeromedical Research Laboratory; PM Air Warrior, PEO Soldier, Huntsville, AL 35806-3302. Approved for public release; distribution unlimited. The modeling data and analysis presented in...

  15. The influence of depth of focus on visibility of monocular head-mounted display symbology in simulation and training applications

    Science.gov (United States)

    Winterbottom, Marc D.; Patterson, Robert; Pierce, Byron J.; Covas, Christine; Winner, Jennifer

    2005-05-01

    The Joint Helmet Mounted Cueing System (JHMCS) is being considered for integration into the F-15, F-16, and F-18 aircraft. If this integration occurs, similar monocular head-mounted displays (HMDs) will need to be integrated with existing out-the-window simulator systems for training purposes. One such system is the Mobile Modular Display for Advanced Research and Training (M2DART), which is constructed with flat-panel rear-projection screens around a nominal eye point. Because the panels are flat, the distance from the eye point to the display screen varies depending upon the location on the screen to which the observer is directing fixation. Variation in focal distance may create visibility problems for either the HMD symbology or the out-the-window imagery presented on the simulator rear-projection display screen because observers may not be able to focus both sets of images simultaneously. The extent to which blurring occurs will depend upon the difference between the focal planes of the simulator display and HMD as well as the depth of focus of the observer. In our psychophysical study, we investigated whether significant blurring occurs as a result of such differences in focal distances and established an optimal focal distance for an HMD which would minimize blurring for a range of focal distances representative of the M2DART. Our data suggest that blurring of symbology due to differing focal planes is not a significant issue within the range of distances tested and that the optimal focal distance for an HMD is the optical midpoint between the near and far rear-projection screen distances.

  16. Inexpensive Monocular Pico-Projector-based Augmented Reality Display for Surgical Microscope.

    Science.gov (United States)

    Shi, Chen; Becker, Brian C; Riviere, Cameron N

    2012-01-01

    This paper describes an inexpensive pico-projector-based augmented reality (AR) display for a surgical microscope. The system is designed for use with Micron, an active handheld surgical tool that cancels hand tremor of surgeons to improve microsurgical accuracy. Using the AR display, virtual cues can be injected into the microscope view to track the movement of the tip of Micron, show the desired position, and indicate the position error. Cues can be used to maintain high performance by helping the surgeon to avoid drifting out of the workspace of the instrument. Also, boundary information such as the view range of the cameras that record surgical procedures can be displayed to indicate the operating area to the surgeon. Furthermore, numerical, textual, or graphical information can be displayed, showing such things as tool tip depth in the workspace and the on/off status of the canceling function of Micron.

  17. The effect of a monocular helmet-mounted display on aircrew health: a 10-year prospective cohort study of Apache AH MK 1 pilots: study midpoint update

    Science.gov (United States)

    Hiatt, Keith L.; Rash, Clarence E.; Watters, Raymond W.; Adams, Mark S.

    2009-05-01

    A collaborative occupational health study has been undertaken by Headquarters Army Aviation, Middle Wallop, UK, and the U.S. Army Aeromedical Research Laboratory, Fort Rucker, Alabama, to determine if the use of the Integrated Helmet and Display Sighting System (IHADSS) monocular helmet-mounted display (HMD) in the Apache AH Mk 1 attack helicopter has any long-term (10-year) effect on visual performance. The test methodology consists primarily of a detailed questionnaire and an annual battery of vision tests selected to capture changes in visual performance of Apache aviators over their flight career (with an emphasis on binocular visual function). Pilots using binocular night vision goggles serve as controls and undergo the same methodology. Currently, at the midpoint of the study, with the exception of a possible colour discrimination effect, there are no data indicating that the long-term use of the IHADSS monocular HMD results in negative effects on vision.

  18. Helmet-Mounted Displays (HMD)

    Data.gov (United States)

    Federal Laboratory Consortium — The Helmet-Mounted Display lab is responsible for monocular HMD day display evaluations; monocular HMD night vision performance processes; binocular HMD day display...

  19. The effect of contrast on monocular versus binocular reading performance.

    Science.gov (United States)

    Johansson, Jan; Pansell, Tony; Ygge, Jan; Seimyr, Gustaf Öqvist

    2014-05-14

    The binocular advantage in reading performance is typically small. On the other hand, research shows binocular reading to be remarkably robust to degraded stimulus properties. We hypothesized that this robustness may stem from an increasing binocular contribution. The main objective was to compare monocular and binocular performance at different stimulus contrasts and assess the level of binocular superiority. A secondary objective was to assess any asymmetry in performance related to ocular dominance. In a balanced repeated-measures experiment, 18 subjects read texts at three levels of contrast monocularly and binocularly while their eye movements were recorded. The binocular advantage increased with reduced contrast: monocular reading was 7% slower at 40% contrast, 9% slower at 20% contrast, and 21% slower at 10% contrast. A statistically significant interaction effect was found in fixation duration, showing a more adverse effect in the monocular condition at the lowest contrast. No significant effects of ocular dominance were observed. The outcome suggests that binocularity contributes increasingly to reading performance as stimulus contrast decreases. The strongest difference between monocular and binocular performance was due to fixation duration. The findings raise a clinical point: it may be necessary to consider tests at different contrast levels when estimating reading performance. © 2014 ARVO.

  20. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Cohort Study of Apache AH Mk 1 Pilots Four-Year Review

    Science.gov (United States)

    2009-12-01

    conventional Snellen charts (Bailey and Lovie, 1976). This test was conducted monocularly for both left and right eyes using the habitual correction... Conversion from logMAR to Snellen acuity (20/xx) is accomplished by computing the Snellen denominator as xx = 20 × 10^logMAR. For the... last measurement cycle, values were available for all 23 control subjects. For the right eye, the mean visual acuity was 0.08 logMAR (Snellen...
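
    As a quick check of the conversion quoted above (Snellen denominator xx = 20 × 10^logMAR), here is a one-line sketch; the 0.08 logMAR input is the right-eye mean reported in the record.

      def snellen_denominator(logmar):
          """Convert logMAR acuity to the denominator xx of a 20/xx Snellen fraction."""
          return 20 * 10 ** logmar

      print(round(snellen_denominator(0.0)))   # 20  -> 20/20
      print(round(snellen_denominator(0.08)))  # 24  -> roughly 20/24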

  1. Infants' ability to respond to depth from the retinal size of human faces: comparing monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-11-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger 'closer' preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. A Comparison of Monocular and Binocular Depth Perception in 5- and 7-Month-Old Infants.

    Science.gov (United States)

    Granrud, Carl E.; And Others

    1984-01-01

    Compares monocular depth perception with binocular depth perception in five- to seven-month-old infants. Reaching preferences (dependent measure) observed in the monocular condition indicated sensitivity to monocular depth information. Binocular viewing resulted in a far more consistent tendency to reach for the nearer object. (Author)

  4. Monocular transparency generates quantitative depth.

    Science.gov (United States)

    Howard, Ian P; Duke, Philip A

    2003-11-01

    Monocular zones adjacent to depth steps can create an impression of depth in the absence of binocular disparity. However, the magnitude of depth is not specified. We designed a stereogram that provides information about depth magnitude but which has no disparity. The effect depends on transparency rather than occlusion. For most subjects, depth magnitude produced by monocular transparency was similar to that created by a disparity-defined depth probe. Addition of disparity to monocular transparency did not improve the accuracy of depth settings. The magnitude of depth created by monocular occlusion fell short of that created by monocular transparency.

  5. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) the subjects viewed a 3-D video and also its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli that either contained or did not contain monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were only present for the 3-D video and not the 2-D video, while in the course of monocular viewing of the 2-D video, some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II), in general, seem to support this hypothesis. © The Author(s) 2015.

  6. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. Amodal completion with background determines depth from monocular gap stereopsis.

    Science.gov (United States)

    Grove, Philip M; Ben Sachtler, W L; Gillam, Barbara J

    2006-10-01

    Grove, Gillam, and Ono [Grove, P. M., Gillam, B. J., & Ono, H. (2002). Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms. Vision Research, 42, 1859-1870] reported that perceived depth in monocular gap stereograms [Gillam, B. J., Blackburn, S., & Nakayama, K. (1999). Stereopsis based on monocular gaps: Metrical encoding of depth and slant without matching contours. Vision Research, 39, 493-502] was attenuated when the color/texture in the monocular gap did not match the background. It appears that continuation of the gap with the background constitutes an important component of the stimulus conditions that allow a monocular gap in an otherwise binocular surface to be responded to as a depth step. In this report we tested this view using the conventional monocular gap stimulus of two identical grey rectangles separated by a gap in one eye but abutting to form a solid grey rectangle in the other. We compared depth seen at the gap for this stimulus with stimuli that were identical except for two additional small black squares placed at the ends of the gap. If the squares were placed stereoscopically behind the rectangle/gap configuration (appearing on the background) they interfered with the perceived depth at the gap. However, when they were placed in front of the configuration this attenuation disappeared. The gap and the background were able under these conditions to complete amodally.

  8. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
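
    The record states only that a linear combination of center and surround modulation fit the data. One illustrative way to evaluate such a model, assumed here rather than taken from the paper, is to treat the two equal-frequency chromatic modulations as phasors, so the predicted modulation depth of the center is the amplitude of the center plus a weighted, phase-shifted surround; the weight w would be a fitted free parameter.

      import numpy as np

      def predicted_modulation_depth(center_amp, surround_amp, w, phase_rad):
          """Amplitude of center(t) + w * surround(t + phase) for equal-frequency
          sinusoidal modulations, computed by phasor addition."""
          combined = center_amp + w * surround_amp * np.exp(1j * phase_rad)
          return np.abs(combined)

      # Example: a counterphase surround reduces the effective center modulation.
      print(predicted_modulation_depth(1.0, 1.0, 0.3, np.pi))  # 0.7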

  9. Monocular visual ranging

    Science.gov (United States)

    Witus, Gary; Hunt, Shawn

    2008-04-01

    The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in the image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera auto-focus function, and combines this with an estimate derived from angular expansion of a constellation of visual tracking points.
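
    As a generic illustration of the stationary behavior (a range image from differential blur across a focus stack), the sketch below scores each focus setting per pixel by the local variance of the Laplacian and keeps the distance of the sharpest setting. It is not the authors' implementation, and it assumes a grayscale stack with a known focus distance for every setting.

      import numpy as np
      from scipy.ndimage import laplace, uniform_filter

      def depth_from_focus(stack, focus_distances_m, window=9):
          """stack: (N, H, W) grayscale frames, one per focus setting.
          Returns an (H, W) range image built from the sharpest setting per pixel."""
          stack = np.asarray(stack, dtype=np.float64)
          sharpness = np.empty_like(stack)
          for i, frame in enumerate(stack):
              lap = laplace(frame)                 # second-derivative (blur-sensitive) response
              mean = uniform_filter(lap, window)
              sharpness[i] = uniform_filter(lap * lap, window) - mean * mean  # local variance
          best = np.argmax(sharpness, axis=0)      # index of the sharpest focus setting
          return np.asarray(focus_distances_m)[best]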

  10. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
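
    The sketch below shows, under stated assumptions, the general shape of such a predictor: a full-covariance Gaussian mixture fit over joint [image-feature, depth] vectors, queried with the responsibility-weighted conditional mean. The NSS feature extraction itself is not reproduced; image_feats and depths are placeholders for whatever per-patch statistics and co-registered depths are available.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def fit_joint_gmm(image_feats, depths, n_components=8, seed=0):
          """Fit a Gaussian mixture over joint [image-feature, depth] rows."""
          joint = np.column_stack([image_feats, depths])
          return GaussianMixture(n_components=n_components,
                                 covariance_type='full',
                                 random_state=seed).fit(joint)

      def predict_depth(gmm, x, d_dim=1):
          """Posterior-mean depth given an image-feature vector x."""
          x_dim = gmm.means_.shape[1] - d_dim
          log_w, cond_means = [], []
          for k in range(gmm.n_components):
              mu, S = gmm.means_[k], gmm.covariances_[k]
              diff = x - mu[:x_dim]
              sol = np.linalg.solve(S[:x_dim, :x_dim], diff)
              _, logdet = np.linalg.slogdet(S[:x_dim, :x_dim])
              # responsibility of component k given x (constant terms cancel on normalizing)
              log_w.append(np.log(gmm.weights_[k]) - 0.5 * (logdet + diff @ sol))
              # conditional mean of the depth block given x under component k
              cond_means.append(mu[x_dim:] + S[x_dim:, :x_dim] @ sol)
          log_w = np.array(log_w)
          w = np.exp(log_w - log_w.max())
          w /= w.sum()
          return np.sum(w[:, None] * np.array(cond_means), axis=0)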

  11. Measuring young infants' sensitivity to height-in-the-picture-plane by contrasting monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-01-01

    To examine young infants' sensitivity to a pictorial depth cue, we compared monocular and binocular preferential looking to objects whose depth was specified by height-in-the-picture-plane. For adults, this cue generates the perception that a lower object is closer than a higher object. This study showed that 4- and 5-month-old infants fixated the lower, apparently closer, figure more often under monocular than binocular presentation, providing evidence of their sensitivity to the pictorial depth cue. Because the displays were identical in the two conditions except for binocular information for depth, the difference in looking-behavior indicated sensitivity to depth information, excluding the possibility that the infants responded to 2D characteristics. This study also confirmed the usefulness of the method, preferential looking with a monocular and binocular comparison, for examining sensitivity to a pictorial depth cue in young infants, who are too immature to reach reliably for the closer of two objects. © 2013 Wiley Periodicals, Inc.

  12. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Science.gov (United States)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies for input devices in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed that uses a single monocular video camera. The proposed algorithm works under the condition of a fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size, based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We apply our system to an eyegaze communication interface and verify its performance through key-typing experiments with a visual keyboard on a display.
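
    A minimal OpenCV sketch of the two steps named above, Hough-transform iris localization and the Euclidean displacement of the iris center relative to a calibration frame, is given below. The Hough parameters and the pixels_per_degree scale factor are hypothetical placeholders; the paper instead derives the gaze angle from a physiological eyeball model.

      import cv2
      import numpy as np

      def iris_center(gray_eye_roi):
          """Return the (x, y) center of the strongest detected circle (iris candidate)."""
          blurred = cv2.medianBlur(gray_eye_roi, 5)
          circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                                     minDist=blurred.shape[0],
                                     param1=100, param2=30,
                                     minRadius=8, maxRadius=40)
          if circles is None:
              return None
          x, y, _r = circles[0, 0]
          return np.array([x, y])

      def gaze_angle_deg(current_center, reference_center, pixels_per_degree):
          """Euclidean shift of the iris center vs. the calibration frame,
          scaled by a hypothetical pixels-per-degree calibration constant."""
          return np.linalg.norm(current_center - reference_center) / pixels_per_degree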

  13. Effects of Frame of Reference and Viewing Condition on Attentional Issues with Helmet Mounted Displays

    Science.gov (United States)

    1998-01-01

    dominant image may shift from eye to eye, so that the two monocular views will appear as alternating images (Arditi, 1986; Davis, 1997). Thus, the... Arditi, A. (1986). Binocular vision. In K.R. Boff, L. Kaufman, and J.P. Thomas (Eds.), Handbook of Perception and Human Performance, Vol. 1, New...

  14. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    Science.gov (United States)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.

  15. A Case of Functional (Psychogenic) Monocular Hemianopia Analyzed by Measurement of Hemifield Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Yoneda

    2013-12-01

    Full Text Available Purpose: Functional monocular hemianopia is an extremely rare condition, for which measurement of hemifield visual evoked potentials (VEPs) has not been previously described. Methods: A 14-year-old boy with functional monocular hemianopia was followed up with Goldmann perimetry and measurement of hemifield and full-field VEPs. Results: The patient had a history of monocular temporal hemianopia of the right eye following headache, nausea and ague. There was no relative afferent pupillary defect, and a color perception test was normal. Goldmann perimetry revealed a vertical monocular temporal hemianopia of the right eye; the hemianopia on the right was also detected with a binocular visual field test. Computed tomography, magnetic resonance imaging (MRI) and MR angiography of the brain, including the optic chiasm, as well as orbital MRI revealed no abnormalities. On the basis of these results, we diagnosed the patient's condition as functional monocular hemianopia. Pattern VEPs according to the International Society for Clinical Electrophysiology of Vision (ISCEV) standard were within the normal range. The hemifield pattern VEPs for the right eye showed a symmetrical latency and amplitude for nasal and temporal hemifield stimulation. One month later, the visual field defect of the patient spontaneously disappeared. Conclusions: The latency and amplitude of hemifield VEPs for a patient with functional monocular hemianopia were normal. Measurement of hemifield VEPs may thus provide an objective tool for distinguishing functional hemianopia from hemifield loss caused by an organic lesion.

  16. Head Worn Display System for Equivalent Visual Operations

    Science.gov (United States)

    Cupero, Frank; Valimont, Brian; Wise, John; Best, Carl; DeMers, Bob

    2009-01-01

    Head-worn displays, or so-called near-to-eye displays, have potentially significant advantages in terms of cost, overcoming cockpit space constraints, and the display of spatially integrated information. However, many technical issues need to be overcome before these technologies can be successfully introduced into commercial aircraft cockpits. The results of three activities are reported. First, the near-to-eye display design, technological, and human factors issues are described and a literature review is presented. Second, the results are reported of a fixed-base piloted simulation investigating the impact of near-to-eye displays on both operational and visual performance. Straight-in approaches were flown in simulated visual and instrument conditions while using either a biocular or a monocular display placed on either the dominant or non-dominant eye. The pilot's flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested. The data generally support a monocular design with minimal impact due to eye dominance. Finally, a method for measuring head tracker system latency is developed and used to compare two different devices.

  17. Validation of Data Association for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-01-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, providing strong and robust sensory systems even with simple devices, such as webcams in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, JCBB (joint compatibility branch and bound). The HOHCT approach has been developed as a way to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCBB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
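
    Neither HOHCT nor JCBB is spelled out in this record. The sketch below shows only the individual compatibility gate that such batch validation tests build on: a chi-square test on the Mahalanobis distance of a single measurement-to-landmark pairing, with the innovation and its covariance assumed to come from the SLAM filter.

      import numpy as np
      from scipy.stats import chi2

      def individually_compatible(innovation, S, alpha=0.05):
          """Chi-square gate on one measurement-landmark pairing.
          innovation: z - h(x), the residual; S: innovation covariance H P H^T + R."""
          d2 = innovation @ np.linalg.solve(S, innovation)  # squared Mahalanobis distance
          return d2 < chi2.ppf(1.0 - alpha, df=len(innovation))

      # Example with a 2-D image-point measurement.
      print(individually_compatible(np.array([0.8, -0.5]),
                                    np.array([[0.5, 0.0], [0.0, 0.5]])))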

  18. Monocular indoor localization techniques for smartphones

    Directory of Open Access Journals (Sweden)

    Hollósi Gergely

    2016-12-01

    Full Text Available In the last decade, a huge amount of research has been devoted to the indoor visual localization of personal smartphones. Considering the available sensor capabilities, monocular odometry provides a promising solution, even reflecting the requirements of augmented reality applications. This paper aims to give an overview of state-of-the-art results regarding monocular visual localization. For this purpose, the essential basics of computer vision are presented and the most promising solutions are reviewed.

  19. MONOCULAR AND BINOCULAR VISION IN THE PERFORMANCE OF A COMPLEX SKILL

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2011-09-01

    Full Text Available The goal of this study was to investigate the role of binocular and monocular vision in 16 gymnasts as they performed a handspring on vault. In particular, we reasoned that if binocular visual information is eliminated while experts and apprentices perform a handspring on vault, and their performance level changes or is maintained, then such information must or must not be necessary for their best performance. If the elimination of binocular vision leads to differences in gaze behavior in either experts or apprentices, this would answer the question of whether gaze behavior is adaptive, and thus whether this is a function of expertise level or not. Gaze behavior was measured using a portable and wireless eye-tracking system in combination with a movement-analysis system. Results revealed that gaze behavior differed between experts and apprentices in the binocular and monocular conditions. In particular, apprentices showed fewer fixations of longer duration in the monocular condition as compared to experts and to the binocular condition. Apprentices showed longer blink durations than experts in both the monocular and binocular conditions. Eliminating binocular vision led to a shorter repulsion phase and a longer second flight phase in apprentices. Experts exhibited no differences in phase durations between binocular and monocular conditions. The findings suggest that experts may not rely on binocular vision when performing handsprings, and movement performance may be influenced in apprentices when binocular vision is eliminated. We conclude that knowledge about gaze-movement relationships may be beneficial for coaches when teaching the handspring on vault in gymnastics.

  20. fMRI investigation of monocular pattern rivalry.

    Science.gov (United States)

    Mendola, Janine D; Buckthought, Athena

    2013-01-01

    In monocular pattern rivalry, a composite image is shown to both eyes. The observer experiences perceptual alternations in which the two stimulus components alternate in clarity or salience. We used fMRI at 3T to image brain activity while participants perceived monocular rivalry passively or indicated their percepts with a task. The stimulus patterns were left/right oblique gratings, face/house composites, or a nonrivalrous control stimulus that did not support the perception of transparency or image segmentation. All stimuli were matched for luminance, contrast, and color. Compared with the control stimulus, the cortical activation for passive viewing of grating rivalry included dorsal and ventral extrastriate cortex, superior and inferior parietal regions, and multiple sites in frontal cortex. When the BOLD signal for the object rivalry task was compared with the grating rivalry task, a similar whole-brain network was engaged, but with significantly greater activity in extrastriate regions, including V3, V3A, the fusiform face area (FFA), and the parahippocampal place area (PPA). In addition, for the object rivalry task, FFA activity was significantly greater during face-dominant periods whereas PPA activity was greater during house-dominant periods. Our results demonstrate that slight stimulus changes that trigger monocular rivalry recruit a large whole-brain network, as previously identified for other forms of bistability. Moreover, the results indicate that rivalry for complex object stimuli preferentially engages extrastriate cortex. We also establish that even with natural viewing conditions, endogenous attentional fluctuations in monocular pattern rivalry will differentially drive object-category-specific cortex, similar to binocular rivalry, but without complete suppression of the nondominant image.

  1. Monocular Video Guided Garment Simulation

    Institute of Scientific and Technical Information of China (English)

    Fa-Ming Li; Xiao-Wu Chen; Bin Zhou; Fei-Xiang Lu; Kan Guo; Qiang Fu

    2015-01-01

    We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It is a combination of a physically-based simulation and a boundary-based modification. Given a garment in the video worn on a mannequin, the simulation generates a garment initial shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into such a shape that matches the garment 2D boundary extracted from the video. According to the matching correspondences between the vertices on the shape and the points on the boundary, the modification is implemented by attracting the matched vertices and their neighboring vertices. For best-matching correspondences and efficient performance, three criteria are introduced to select the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes by the modification are also propagated from one frame to the next frame. As a result, the generated garment 3D shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.

  2. Ajax, XSLT and SVG: Displaying ATLAS conditions data with new web technologies

    CERN Document Server

    Roe, S A

    2010-01-01

    The combination of three relatively recent technologies is described, which allows an easy path from database retrieval to interactive web display. SQL queries on an Oracle database can be performed in a manner that directly returns an XML description of the result, and Ajax techniques (Asynchronous JavaScript And XML) are used to dynamically inject the data into a web display, accompanied by an XSLT transform template which determines how the data will be formatted. By tuning the transform to generate SVG (Scalable Vector Graphics), a direct graphical representation can be produced in the web page while retaining the database data as the XML source, allowing dynamic links to be generated in the web representation while still permitting programmatic use of the data from a user application. With the release of the SVG 1.2 Tiny draft specification, the display can also be tailored for mobile devices. The technologies are described and a sample application demonstrated, showing conditions data from the ATLAS Sem...
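
    As a rough offline illustration of the XML-plus-XSLT-to-SVG step only (the Ajax delivery, the Oracle query and the ATLAS schema are not reproduced), here is a sketch using Python's lxml with a made-up conditions payload; in the real system the browser applies the transform.

      from lxml import etree

      # Hypothetical XML standing in for a conditions-database query result.
      DATA = etree.XML("""<channels>
        <channel id="1" value="42"/>
        <channel id="2" value="17"/>
      </channels>""")

      # XSLT template turning each <channel> into an SVG bar.
      STYLE = etree.XML("""<xsl:stylesheet version="1.0"
          xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
          xmlns:svg="http://www.w3.org/2000/svg">
        <xsl:template match="/channels">
          <svg:svg width="200" height="100">
            <xsl:for-each select="channel">
              <svg:rect x="{(position()-1)*30}" y="{100 - @value}"
                        width="20" height="{@value}" fill="steelblue"/>
            </xsl:for-each>
          </svg:svg>
        </xsl:template>
      </xsl:stylesheet>""")

      transform = etree.XSLT(STYLE)
      print(str(transform(DATA)))  # serialized SVG, ready to inject into a page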

  3. P2-1: Visual Short-Term Memory Lacks Sensitivity to Stereoscopic Depth Changes but is Much Sensitive to Monocular Depth Changes

    Directory of Open Access Journals (Sweden)

    Hae-In Kang

    2012-10-01

    Full Text Available Depth from both binocular disparity and monocular depth cues is presumably one of the most salient features characterizing the variety of visual objects in our daily life. Therefore it is plausible to expect that human vision should be good at perceiving objects' depth changes arising from binocular disparities and monocular pictorial cues. However, what if the estimated depth needs to be remembered in visual short-term memory (VSTM) rather than just perceived? In a series of experiments, we asked participants to remember the depth of items in an array at the beginning of each trial. A set of test items followed the memory array, and the participants were asked to report whether one of the items in the test array had changed its depth from the remembered items or not. The items differed from each other in three different depth conditions: (1) stereoscopic depth under binocular disparity manipulations, (2) monocular depth under pictorial cue manipulations, and (3) both stereoscopic and monocular depth. The accuracy of detecting depth change was substantially higher in the monocular condition than in the binocular condition, and the accuracy in the both-depth condition was moderately improved compared to the monocular condition. These results indicate that VSTM benefits more from monocular depth than stereoscopic depth, and further suggest that storage of depth information in VSTM requires both binocular and monocular information for optimal memory performance.

  4. Differences in displayed pump flow compared to measured flow under varying conditions during simulated cardiopulmonary bypass.

    LENUS (Irish Health Repository)

    Hargrove, M

    2008-07-01

    Errors in blood flow delivery due to shunting have been reported to potentially reduce flow by up to 40-83% during cardiopulmonary bypass. The standard roller pump measures revolutions per minute and, using a calibration factor for the tubing size, calculates and displays flow accordingly. We compared displayed roller-pump flow with ultrasonically measured flow to ascertain whether measured flow correlated with the heart-lung pump flow reading. Flows were compared under varying conditions of pump run duration, temperature, viscosity, varying arterial/venous loops, occlusiveness, outlet pressure, use of silicone or polyvinyl chloride (PVC) in the roller race, different tubing diameters, and use of a venous vacuum-drainage device.
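
    To make the displayed-versus-measured comparison concrete, the sketch below computes the flow a roller-pump console would display (rpm times a per-revolution tubing calibration factor) and the percent shortfall of an ultrasonic measurement; the 27 mL/rev figure and the example numbers are hypothetical, not values from this study.

      def displayed_flow_lpm(rpm, ml_per_rev):
          """Console-displayed flow: revolutions per minute times the tubing
          calibration factor (mL delivered per revolution), in L/min."""
          return rpm * ml_per_rev / 1000.0

      def shortfall_pct(displayed_lpm, measured_lpm):
          """Percent by which ultrasonically measured flow falls below displayed flow."""
          return 100.0 * (displayed_lpm - measured_lpm) / displayed_lpm

      disp = displayed_flow_lpm(rpm=150, ml_per_rev=27.0)  # hypothetical 3/8-inch tubing
      print(disp, shortfall_pct(disp, measured_lpm=3.4))   # ~4.05 L/min, ~16% shortfall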

  5. The Conditional Sink: Counterfactual Display in the Valuation of a Carbon Offsetting Reforestation Project

    Directory of Open Access Journals (Sweden)

    Véra Ehrenstein

    2013-11-01

    Full Text Available This paper examines counterfactual display in the valuation of carbon offsetting projects. Considered a legitimate way to encourage climate change mitigation, such projects rely on the establishment of procedures for the prospective assessment of their capacity to become carbon sinks. This requires imagining possible worlds and assessing their plausibility. The world inhabited by the project is articulated through conditional formulation and subjected to what we call "counterfactual display": the production and circulation of documents that demonstrate and configure the counterfactual valuation. We present a case study on one carbon offsetting reforestation project in the Democratic Republic of Congo. We analyse the construction of the scene that allows the "What would have happened" question to make sense and become actionable. We highlight the operations of calculative framing that this requires, the reality constraints it relies upon, and the entrepreneurial conduct it stimulates.

  6. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for the human visual system; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

  7. Hazard detection with a monocular bioptic telescope.

    Science.gov (United States)

    Doherty, Amy L; Peli, Eli; Luo, Gang

    2015-09-01

    The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area not visible to the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos when those hazards fall in the ring scotoma area. Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that, when reading them through the telescope, the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without the bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma), to further determine the cause-and-effect relationship between hazard detection and the fellow eye. There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, to 50% while reading with the non-occluding bioptic, and to 34% while reading with the partially occluding bioptic. For normally sighted subjects, detection of vertical hazards (53%) was significantly higher than detection of lateral hazards (38%) with the partially occluding bioptic. Detection of driving hazards is impaired by the addition of a secondary reading-like task, and is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma. © 2015 The Authors. Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  8. Monocular Blindness: Is It a Handicap?

    Science.gov (United States)

    Knoth, Sharon

    1995-01-01

    Students with monocular vision may be in need of special assistance and should be evaluated by a multidisciplinary team to determine whether the visual loss is affecting educational performance. This article discusses the student's eligibility for special services, difficulty in performing depth perception tasks, difficulties in specific classroom…

  9. Disparity biasing in depth from monocular occlusions.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2011-07-15

    Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  11. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier optics viewing angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, three viewing angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Display aspect simulation using the viewing angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
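
    The record above quantifies crosstalk without stating the metric. As a rough, hedged illustration, the sketch below computes one commonly used system-crosstalk ratio from three luminance readings taken at a fixed eye position; the function name and example values are ours, and the authors' Fourier-optics instrument may use a different definition.

        def system_crosstalk(leak_lum, black_lum, signal_lum):
            """Crosstalk at one eye position, expressed as a fraction.

            leak_lum   -- luminance with the unintended-eye view driven white
                          and the intended-eye view black
            black_lum  -- luminance with both views black
            signal_lum -- luminance with the intended-eye view driven white
                          and the unintended-eye view black
            """
            return (leak_lum - black_lum) / (signal_lum - black_lum)

        # Hypothetical readings in cd/m^2: 6.2 leakage, 0.4 black level, 80 signal.
        print(f"{100 * system_crosstalk(6.2, 0.4, 80.0):.1f} % crosstalk")  # about 7.3 %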

  12. Monocular Road Detection Using Structured Random Forest

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2016-05-01

    Full Text Available Road detection is a key task for autonomous land vehicles. Monocular vision-based road detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, the pixel-wise classifiers are faced with the ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce the ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction model in road detection is the Markov random field or conditional random field. However, the random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of the classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to get more consistent results. Besides this benefit, by predicting a batch of pixels in a single classification, the structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results tested on the KITTI-ROAD dataset and data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest both in accuracy and efficiency.
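
    The abstract describes the structured-forest idea only at a high level. The hedged sketch below approximates one aspect of it, labelling a whole patch of pixels in a single prediction, by training scikit-learn's multi-output random forest on synthetic patches; it is not the structured-split training of the paper, and all names and data are invented for the example.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        PATCH = 8  # each prediction labels an 8x8 block of pixels

        def make_patches(n):
            """Synthetic stand-in for image/label patches: everything below a
            random 'horizon' row is road (label 1) and slightly brighter."""
            X, Y = [], []
            for _ in range(n):
                horizon = rng.integers(2, PATCH - 1)
                labels = np.zeros((PATCH, PATCH), dtype=int)
                labels[horizon:, :] = 1
                intensities = labels * 0.6 + rng.normal(0.2, 0.1, (PATCH, PATCH))
                X.append(intensities.ravel())  # features: flattened intensity patch
                Y.append(labels.ravel())       # structured target: flattened label patch
            return np.array(X), np.array(Y)

        X_train, Y_train = make_patches(200)
        X_test, Y_test = make_patches(20)

        # One call labels all 64 pixels of a patch jointly, which mirrors the
        # efficiency argument made in the abstract (a batch of pixels per prediction).
        forest = RandomForestClassifier(n_estimators=50, random_state=0)
        forest.fit(X_train, Y_train)
        pred = forest.predict(X_test)
        print("per-pixel accuracy:", (pred == Y_test).mean())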

  13. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually driven activity. Recovery of neurofilament also occurred when the competitive disadvantage of the deprived eye was removed but, unlike the recovery of soma size, was dependent upon visually driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  14. Effect of Acid Dissolution Conditions on Recovery of Valuable Metals from Used Plasma Display Panel Scrap

    Directory of Open Access Journals (Sweden)

    Kim Chan-Mi

    2017-06-01

    Full Text Available The objective of this particular study was to recover valuable metals from waste plasma display panels using high energy ball milling with subsequent acid dissolution. Dissolution of the milled plasma display panel (PDP) powder was studied in HCl, HNO3, and H2SO4 acidic solutions. The effects of dissolution acid, temperature, time, and PDP scrap powder to acid ratio on the leaching process were investigated and the most favorable conditions were found: (1) valuable metals (In, Ag, Mg) were recovered from PDP powder in a mixture of concentrated hydrochloric acid (HCl:H2O = 50:50); (2) the optimal dissolution temperature and time for the valuable metals were found to be 60°C and 30 min, respectively; (3) the ideal PDP scrap powder to acid solution ratio was found to be 1:10. The proposed method was applied to the recovery of magnesium, silver, and indium with satisfactory results.

  15. Differential processing of binocular and monocular gloss cues in human visual cortex

    Science.gov (United States)

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  16. Differential processing of binocular and monocular gloss cues in human visual cortex.

    Science.gov (United States)

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  17. Monocular and binocular depth discrimination thresholds.

    Science.gov (United States)

    Kaye, S B; Siddiqui, A; Ward, A; Noonan, C; Fisher, A C; Green, J R; Brown, M C; Wareing, P A; Watt, P

    1999-11-01

    Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured; stereoacuity conventionally refers to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold (BT) from the combined monocular and binocular threshold of depth discrimination (CT). Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required, by means of a hand control, to align two electronically controlled spheres at viewing distances of 1, 3, and 6 m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan[(1/tan alphaC - 1/tan alphaM)^-1], where alphaC and alphaM are the angles subtended at the nodal points by objects situated at the combined monocular-binocular threshold (alphaC) and the monocular threshold (alphaM) of discrimination. In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and with the binocular thresholds of the Frisby and Titmus tests than the corresponding combined thresholds (p = 0.0019). The VDS was found to be an easy to use real depth
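
    As a small worked example of the threshold formula quoted above, the sketch below evaluates BT from a hypothetical combined threshold of 40 arcsec and monocular threshold of 200 arcsec; the numbers are illustrative only and are not taken from the study.

        import math

        ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

        def binocular_threshold(alpha_c, alpha_m):
            """BT = arctan[(1/tan(alphaC) - 1/tan(alphaM))^-1], angles in radians."""
            return math.atan(1.0 / (1.0 / math.tan(alpha_c) - 1.0 / math.tan(alpha_m)))

        # Hypothetical thresholds: combined 40 arcsec, monocular 200 arcsec.
        bt = binocular_threshold(40 * ARCSEC, 200 * ARCSEC)
        print(bt / ARCSEC)  # approximately 50 arcsec, i.e. BT is larger than CT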

  18. Full parallax multifocus three-dimensional display using a slanted light source array

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Eun-Hee; Kim, Dong-Wook

    2011-11-01

    A new multifocus three-dimensional display, which gives a full parallax monocular depth cue and omni-directional focus, is developed with the fewest parallax images. The key factor of this display system is a slanted array of light-emitting diode light sources, rather than a horizontal array. In this system, defocus effects are experimentally achieved, and the monocular focus effect is tested with four parallax images and even with two parallax images. The full parallax multifocus three-dimensional display becomes more applicable to monocular or binocular augmented reality three-dimensional displays when modified to a see-through type.

  19. Quantitative perceived depth from sequential monocular decamouflage.

    Science.gov (United States)

    Brooks, K R; Gillam, B J

    2006-03-01

    We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay ≥ 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.

  20. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy using a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  1. Outdoor autonomous navigation using monocular vision

    OpenAIRE

    Royer, Eric; Bom, Jonathan; Dhome, Michel; Thuilot, Benoît; Lhuillier, Maxime; Marmoiton, Francois

    2005-01-01

    International audience; In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided on a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three dimensional map of the trajectory and the environment is built. When this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are sho...

  2. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in monocular conditions. PMID:21724567

  3. Binocular and monocular depth cues in online feedback control of 3D pointing movement.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2011-06-30

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.

  4. Monocular alignment in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Wade, Nicholas J

    2002-04-01

    We examined (a) whether vertical lines at different physical horizontal positions in the same eye can appear to be aligned, and (b), if so, whether the difference between the horizontal positions of the aligned vertical lines can vary with the perceived depth between them. In two experiments, each of two vertical monocular lines was presented (in its respective rectangular area) in one field of a random-dot stereopair with binocular disparity. In Experiment 1, 15 observers were asked to align a line in an upper area with a line in a lower area. The results indicated that when the lines appeared aligned, their horizontal physical positions could differ and the direction of the difference coincided with the type of disparity of the rectangular areas; this is not consistent with the law of the visual direction of monocular stimuli. In Experiment 2, 11 observers were asked to report relative depth between the two lines and to align them. The results indicated that the difference of the horizontal position did not covary with their perceived relative depth, suggesting that the visual direction and perceived depth of the monocular line are mediated via different mechanisms.

  5. Visual SLAM for Handheld Monocular Endoscope.

    Science.gov (United States)

    Grasa, Óscar G; Bernal, Ernesto; Casado, Santiago; Gil, Ismael; Montiel, J M M

    2014-01-01

    Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.
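
    The abstract gives no implementation detail, so the following is only a hedged sketch of the generic two-frame step (feature matching, essential-matrix estimation, up-to-scale pose recovery, triangulation) that feature-based monocular SLAM pipelines typically build on; it is not the authors' algorithm, and the intrinsic matrix and file names are placeholders.

        import cv2
        import numpy as np

        # Placeholder intrinsics and frames; real values come from calibration and video.
        K = np.array([[700.0, 0.0, 320.0],
                      [0.0, 700.0, 240.0],
                      [0.0,   0.0,   1.0]])
        frame0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
        frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

        # Detect and match features between consecutive frames.
        orb = cv2.ORB_create(2000)
        kp0, des0 = orb.detectAndCompute(frame0, None)
        kp1, des1 = orb.detectAndCompute(frame1, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des0, des1)
        pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
        pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

        # Estimate the essential matrix and recover rotation R and translation t.
        # The translation is known only up to scale, hence the "up-to-scale" map.
        E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)

        # Triangulate correspondences into up-to-scale 3D points.
        P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P1 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
        pts3d = (pts4d[:3] / pts4d[3]).T
        print(R, t.ravel(), pts3d.shape)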

  6. Meibomian gland dysfunction determines the severity of the dry eye conditions in visual display terminal workers.

    Directory of Open Access Journals (Sweden)

    Huping Wu

    Full Text Available OBJECTIVE: To explore whether meibomian gland dysfunction (MGD) determines the severity of dry eye conditions in visual display terminal (VDT) workers. METHODOLOGY: Prospective, case-control study carried out in China. 106 eyes of 53 patients (VDT work time >4 hours per day) were recruited as the long time VDT group; 80 eyes of 40 control subjects (VDT work time ≤4 hours per day) served as the short time VDT group. A questionnaire of Ocular Surface Disease Index (OSDI) and multiple tests were performed: three dry eye tests (tear film breakup time (BUT), corneal fluorescein staining, Schirmer I test) and three MGD parameters (lid margin abnormality score, meibum expression assessment (meibum score), and meibomian gland dropout degree (meiboscore) using Keratograph 5M). PRINCIPAL FINDINGS: OSDI and corneal fluorescein score were significantly higher, while BUT was dramatically shorter, in the long time VDT group than in the short time VDT group. However, the average Schirmer tear volume was within the normal range in both groups. Interestingly, the three MGD parameters were significantly higher in the long time VDT group than in the short time one (P<0.0001). When 52 eyes with Schirmer <10 mm and 54 eyes with Schirmer ≥10 mm were separated from the long time VDT workers, no significant differences were found between the two subgroups in OSDI, fluorescein staining, BUT, or the three MGD parameters. All three MGD parameters were positively correlated with VDT working time (P<0.0001) and fluorescein scores (P<0.0001), inversely correlated with BUT (P<0.05), but not correlated with Schirmer tear volumes in the VDT workers. CONCLUSIONS: Our findings suggest that a malfunction of the meibomian glands is associated with dry eye in long-term VDT workers with higher OSDI scores, even though some of those patients present a normal tear volume.

  7. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex. Stimulus–response curves were constructed by recording the intensity of the reported phosphenes evoked in the contralateral visual field at a range of TMS intensities. Phosphene measurements revealed that MD produced a rapid and robust decrease in cortical excitability relative to a control condition without

  8. Human skeleton proportions from monocular data

    Institute of Scientific and Technical Information of China (English)

    PENG En; LI Ling

    2006-01-01

    This paper introduces a novel method for estimating the skeleton proportions of a human figure from monocular data. The proposed system first automatically extracts the key frames and recovers the perspective camera model from the 2D data. The human skeleton proportions are then estimated from the key frames using the recovered camera model, without posture reconstruction. The proposed method is shown to be simple and fast, and it produces satisfactory results for the input data. The human model with estimated proportions can be used in future research involving human body modeling or human motion reconstruction.

  9. Perception of Spatial Features with Stereoscopic Displays.

    Science.gov (United States)

    1980-10-24

    aniseikonia (differences in retinal image size in the two eyes) are of little significance because only monocular perception of the display is required for...perception as a result of such factors as aniseikonia, uncorrected refractive errors, or phorias results in reduced stereopsis. However, because

  10. Development of three types of multifocus 3D display

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong Wook

    2011-06-01

    Three types of multi-focus (MF) 3D display are developed, and the possibility of providing a monocular depth cue is tested. Multi-focus refers to the ability to provide a monocular depth cue at various depth levels. By achieving the multi-focus function, we developed a 3D display system for each eye that can satisfy accommodation to displayed virtual objects within a defined depth. The first MF 3D display is developed via a laser scanning method, the second MF 3D display uses an LED array as its light source, and the third MF 3D display uses a slanted LED array for a full parallax monocular depth cue. The full parallax MF 3D display system gives an omnidirectional focus effect. The proposed 3D display systems offer a possible solution to the eye fatigue problem that comes from the mismatch between the accommodation of each eye and the convergence of the two eyes. Monocular accommodation is tested, and satisfaction of full parallax accommodation is demonstrated with the proposed full parallax MF 3D display system. We find that omnidirectional focus adjustment is possible via parallax images.

  11. Reversible monocular cataract simulating amaurosis fugax.

    Science.gov (United States)

    Paylor, R R; Selhorst, J B; Weinberg, R S

    1985-07-01

    In a patient having brittle, juvenile-onset diabetes, transient monocular visual loss occurred repeatedly whenever there were wide fluctuations in serum glucose. Amaurosis fugax was suspected. The visual loss differed, however, in that it persisted over a period of hours to several days. Direct observation eventually revealed that the relatively sudden change in vision of one eye was associated with opacification of the lens and was not accompanied by an afferent pupillary defect. Presumably, a hyperosmotic gradient had developed with the accumulation of glucose and sorbitol within the lens. Water was drawn inward, altering the composition of the lens fibers and thereby lowering the refractive index, forming a reversible cataract. Hypoglycemia is also hypothesized to have played a role in the formation of a higher osmotic gradient. The unilaterality of the cataract is attributed to variation in the permeability of asymmetric posterior subcapsular cataracts.

  12. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  13. Generalized nematohydrodynamic boundary conditions with application to bistable twisted nematic liquid-crystal displays

    KAUST Repository

    Fang, Angbo

    2008-12-08

    Parallel to the highly successful Ericksen-Leslie hydrodynamic theory for the bulk behavior of nematic liquid crystals (NLCs), we derive a set of coupled hydrodynamic boundary conditions to describe the NLC dynamics near NLC-solid interfaces. In our boundary conditions, translational flux (flow slippage) and rotational flux (surface director relaxation) are coupled according to the Onsager variational principle of least energy dissipation. The application of our boundary conditions to the truly bistable π-twist NLC cell reveals a complete picture of the dynamic switching processes. It is found that the thus far overlooked translation-rotation dissipative coupling at solid surfaces can accelerate surface director relaxation and enhance the flow rate. This can be utilized to improve the performance of electro-optical nematic devices by lowering the required switching voltages and reducing the switching times. © 2008 The American Physical Society.

  14. Judgments of the distance to nearby virtual objects: interaction of viewing conditions and accommodative demand.

    Science.gov (United States)

    Ellis, S R; Menges, B M

    1997-08-01

    Ten subjects adjusted a real-object probe to match the distance of nearby virtual objects optically presented via a see-through, helmet-mounted display. Monocular, binocular, and stereoscopic viewing conditions were used with two levels of required focus. Observed errors may be related to changes in the subjects' binocular convergence. The results suggest ways in which virtual objects may be presented with improved spatial fidelity.

  15. Localization of monocular stimuli in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Tam, Wa James; Asakura, Nobuhiko; Ohmi, Masao

    2005-09-01

    We examined the phenomenon in which two physically aligned monocular stimuli appear to be non-collinear when each of them is located in binocular regions that are at different depth planes. Using monocular bars embedded in binocular random-dot areas that are at different depths, we manipulated properties of the binocular areas and examined their effect on the perceived direction and depth of the monocular stimuli. Results showed that (1) the relative visual direction and perceived depth of the monocular bars depended on the binocular disparity and the dot density of the binocular areas, and (2) the visual direction, but not the depth, depended on the width of the binocular regions. These results are consistent with the hypothesis that monocular stimuli are treated by the visual system as binocular stimuli that have acquired the properties of their binocular surrounds. Moreover, partial correlation analysis suggests that the visual system utilizes both the disparity information of the binocular areas and the perceived depth of the monocular bars in determining the relative visual direction of the bars.

  16. The Hsp60 protein of Helicobacter pylori displays chaperone activity under acidic conditions

    Directory of Open Access Journals (Sweden)

    Jose A. Mendoza

    2017-03-01

    Full Text Available The heat shock protein Hsp60 is one of the most abundant proteins in Helicobacter pylori. Given its sequence homology to the Escherichia coli Hsp60, or GroEL, Hsp60 from H. pylori would be expected to function as a molecular chaperone in this organism. H. pylori is an organism that grows on the gastric epithelium, where the pH can fluctuate between neutral and 4.5 and the intracellular pH can be as low as 5.0. This study was performed to test the ability of Hsp60 from H. pylori to function as a molecular chaperone under mildly acidic conditions. We report here that Hsp60 could suppress the acid-induced aggregation of alcohol dehydrogenase (ADH) in the 7.0–5.0 pH range. Hsp60 was found to undergo a conformational change within this pH range. It was also found that exposure of hydrophobic surfaces of Hsp60 is significant and that their exposure is increased under acidic conditions. Although alcohol dehydrogenase does not contain exposed hydrophobic surfaces, we found that their exposure is triggered at low pH. Our results demonstrate that Hsp60 from H. pylori can function as a molecular chaperone under acidic conditions and that the interaction between Hsp60 and other proteins may be mediated by hydrophobic interactions.

  17. Monocular blur alters the tuning characteristics of stereopsis for spatial frequency and size.

    Science.gov (United States)

    Li, Roger W; So, Kayee; Wu, Thomas H; Craven, Ashley P; Tran, Truyet T; Gustafson, Kevin M; Levi, Dennis M

    2016-09-01

    Our sense of depth perception is mediated by spatial filters at different scales in the visual brain; low spatial frequency channels provide the basis for coarse stereopsis, whereas high spatial frequency channels provide for fine stereopsis. It is well established that monocular blurring of vision results in decreased stereoacuity. However, previous studies have used tests that are broadband in their spatial frequency content. It is not yet entirely clear how the processing of stereopsis in different spatial frequency channels is altered in response to binocular input imbalance. Here, we applied a new stereoacuity test based on narrow-band Gabor stimuli. By manipulating the carrier spatial frequency, we were able to reveal the spatial frequency tuning of stereopsis, spanning from coarse to fine, under blurred conditions. Our findings show that increasing monocular blur elevates stereoacuity thresholds 'selectively' at high spatial frequencies, gradually shifting the optimum frequency to lower spatial frequencies. Surprisingly, stereopsis for low frequency targets was only mildly affected even with an acuity difference of eight lines on a standard letter chart. Furthermore, we examined the effect of monocular blur on the size tuning function of stereopsis. The clinical implications of these findings are discussed.

  18. Short-term monocular patching boosts the patched eye’s response in visual cortex

    Science.gov (United States)

    Zhou, Jiawei; Baker, Daniel H.; Simard, Mathieu; Saint-Amour, Dave; Hess, Robert F.

    2015-01-01

    Abstract Purpose: Several recent studies have demonstrated that following short-term monocular deprivation in normal adults, the patched eye, rather than the unpatched eye, becomes stronger in subsequent binocular viewing. However, little is known about the site and nature of the underlying processes. In this study, we examine the underlying mechanisms by measuring steady-state visual evoked potentials (SSVEPs) as an index of the neural contrast response in early visual areas. Methods: The experiment consisted of three consecutive stages: a pre-patching EEG recording (14 minutes), a monocular patching stage (2.5 hours) and a post-patching EEG recording (14 minutes; started immediately after the removal of the patch). During the patching stage, a diffuser (transmits light but not pattern) was placed in front of one randomly selected eye. During the EEG recording stage, contrast response functions for each eye were measured. Results: The neural responses from the patched eye increased after the removal of the patch, whilst the responses from the unpatched eye remained the same. Such phenomena occurred under both monocular and dichoptic viewing conditions. Conclusions: We interpret this eye dominance plasticity in adult human visual cortex as homeostatic intrinsic plasticity regulated by an increase of contrast-gain in the patched eye. PMID:26410580

  19. Conditionally reprogrammed normal and transformed mouse mammary epithelial cells display a progenitor-cell-like phenotype.

    Directory of Open Access Journals (Sweden)

    Francisco R Saenz

    Full Text Available Mammary epithelial (ME) cells cultured under conventional conditions senesce after several passages. Here, we demonstrate that mouse ME cells isolated from normal mammary glands or from mouse mammary tumor virus (MMTV)-Neu-induced mammary tumors can be cultured indefinitely as conditionally reprogrammed cells (CRCs) on irradiated fibroblasts in the presence of the Rho kinase inhibitor Y-27632. Cell surface progenitor-associated markers are rapidly induced in normal mouse ME-CRCs relative to ME cells. However, the expression of certain mammary progenitor subpopulations, such as CD49f+ ESA+ CD44+, drops significantly in later passages. Nevertheless, mouse ME-CRCs grown in a three-dimensional extracellular matrix gave rise to mammary acinar structures. ME-CRCs isolated from MMTV-Neu transgenic mouse mammary tumors express high levels of HER2/neu, as well as tumor-initiating cell markers, such as CD44+, CD49f+, and ESA+ (EpCAM). These patterns of expression are sustained in later CRC passages. Early and late passage ME-CRCs from MMTV-Neu tumors that were implanted in the mammary fat pads of syngeneic or nude mice developed vascular tumors that metastasized within 6 weeks of transplantation. Importantly, the histopathology of these tumors was indistinguishable from that of the parental tumors that develop in the MMTV-Neu mice. Application of the CRC system to mouse mammary epithelial cells provides an attractive model system to study the genetics and phenotype of normal and transformed mouse epithelium in a defined culture environment and in vivo transplant studies.

  20. Urolithins display both antioxidant and pro-oxidant activities depending on assay system and conditions.

    Science.gov (United States)

    Kallio, Tuija; Kallio, Johanna; Jaakkola, Mari; Mäki, Marianne; Kilpeläinen, Pekka; Virtanen, Vesa

    2013-11-13

    The biological effects of polyphenolic ellagitannins are mediated by their intestinal metabolites, urolithins. This study investigated redox properties of urolithins A and B using ORAC assay, three cell-based assays, copper-initiated pro-oxidant activity (CIPA) assay, and cyclic voltammetry. Urolithins were strong antioxidants in the ORAC assay, but mostly pro-oxidants in cell-based assays, although urolithin A was an antioxidant in cell culture medium. Parent compound ellagic acid was a strong extracellular antioxidant, but showed no response in the intracellular assay. The CIPA assay confirmed the pro-oxidant activity of ellagitannin metabolites. In the cell proliferation assay, urolithins but not ellagic acid decreased growth and metabolism of HepG2 liver cells. In cyclic voltammetry, the oxidation of urolithin A was partly reversible, but that of urolithin B was irreversible. These results illustrate how strongly measured redox properties depend on the employed assay system and conditions and emphasize the importance of studying pro-oxidant and antioxidant activities in parallel.

  1. Three-dimensional holographic display using active shutter for head mounted display application

    Science.gov (United States)

    Kim, Hyun-Eui; Kim, Nam; Song, Hoon; Lee, Hong-Seok; Park, Jae-Hyeung

    2011-03-01

    A three-dimensional holographic system using active shutters for head mounted display applications is proposed. Conventional three-dimensional head mounted displays suffer from eye fatigue since they only provide binocular disparity, not monocular depth cues such as accommodation. The proposed method presents two holograms of a 3D scene to the corresponding eyes using active shutters. Since the hologram delivered to each eye carries full three-dimensional information, not only binocular depth cues but also monocular depth cues are presented, eliminating eye fatigue. The application to the head mounted display also greatly relaxes the viewing angle requirement, which is one of the main issues of conventional holographic displays. In the presentation, the proposed optical system will be explained in detail with experimental results.

  2. 3D environment capture from monocular video and inertial data

    Science.gov (United States)

    Clark, R. Robert; Lin, Michael H.; Taylor, Colin J.

    2006-02-01

    This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.
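
    As a minimal sketch of the loosely coupled vision-inertial fusion the record describes (inertial data drives the prediction, a visual position estimate corrects it), the code below runs one predict/update cycle of a linear Kalman filter on a constant-velocity camera state. The matrices, noise levels, and measurement model are assumptions chosen for illustration and are not the authors' filter.

        import numpy as np

        dt = 1.0 / 30.0  # assumed video frame interval in seconds
        F = np.block([[np.eye(3), dt * np.eye(3)],
                      [np.zeros((3, 3)), np.eye(3)]])             # constant-velocity model
        B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])  # inertial acceleration input
        H = np.hstack([np.eye(3), np.zeros((3, 3))])              # vision measures position only
        Q = 1e-3 * np.eye(6)   # process noise (assumed)
        R = 1e-2 * np.eye(3)   # vision measurement noise (assumed)

        def step(x, P, accel, vision_pos=None):
            """One cycle: inertial acceleration drives the prediction,
            an optional position fix from the vision pipeline corrects it."""
            x = F @ x + B @ accel
            P = F @ P @ F.T + Q
            if vision_pos is not None:
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (vision_pos - H @ x)
                P = (np.eye(6) - K @ H) @ P
            return x, P

        x, P = np.zeros(6), np.eye(6)   # state = [position, velocity]
        x, P = step(x, P, accel=np.array([0.0, 0.0, 0.1]),
                    vision_pos=np.array([0.0, 0.0, 0.003]))
        print(x.round(4))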

  3. Analysis Of Traffic Conditions Based On The Percentage Of Drivers Using The Instructions Displayed On VMS Boards

    Directory of Open Access Journals (Sweden)

    Leszek Smolarek

    2015-09-01

    Full Text Available The aim of this publication is to show the influence of the human factor on traffic conditions during a traffic incident. The publication also describes the functionality of the model with which the simulation was performed. The model was constructed in the VISSIM and VISUM software, together with Visual Basic for Applications in Excel [8,9]. By coordinating the VBA and VISSIM programs, the traffic incident was switched on and off automatically, as was the VMS board displaying information about the proposed alternative route. Additionally, varying the percentage of drivers using the displayed information made it possible to compare data under identical external conditions influencing traffic. The statistical program Statgraphics Centurion was used for the statistical analysis of the data, making it possible to build a model describing the impact of driver behaviour on traffic conditions. This is an innovative approach to modelling the impact on traffic conditions of drivers' acceptance of the information displayed on the VMS boards.

  4. Ernst Mach and the episode of the monocular depth sensations.

    Science.gov (United States)

    Banks, E C

    2001-01-01

    Although Ernst Mach is widely recognized in psychology for his discovery of the effects of lateral inhibition in the retina ("Mach Bands"), his contributions to the theory of depth perception are not as well known. Mach proposed that steady luminance gradients triggered sensations of depth. He also expanded on Ewald Hering's hypothesis of "monocular depth sensations," arguing that they were subject to the same principle of lateral inhibition as light sensations were. Even after Hermann von Helmholtz's attack on Hering in 1866, Mach continued to develop theories involving the monocular depth sensations, proposing an explanation of perspective drawings in which the mutually inhibiting depth sensations scaled to a mean depth. Mach also contemplated a theory of stereopsis in which monocular depth perception played the primary role. Copyright 2001 John Wiley & Sons, Inc.

  5. Brief monocular deprivation as an assay of short-term visual sensory plasticity in schizophrenia – the binocular effect.

    Directory of Open Access Journals (Sweden)

    John J Foxe

    2013-12-01

    Full Text Available Background: Visual sensory processing deficits are consistently observed in schizophrenia, with clear amplitude reduction of the visual evoked potential (VEP) during the initial 50-150 milliseconds of processing. Similar deficits are seen in unaffected first-degree relatives and drug-naïve first-episode patients, pointing to these deficits as potential endophenotypic markers. Schizophrenia is also associated with deficits in neural plasticity, implicating dysfunction of both glutamatergic and GABAergic systems. Here, we sought to understand the intersection of these two domains, asking whether short-term plasticity during early visual processing is specifically affected in schizophrenia. Methods: Brief periods of monocular deprivation induce relatively rapid changes in the amplitude of the early VEP, i.e. short-term plasticity. Twenty patients and twenty non-psychiatric controls participated. VEPs were recorded during binocular viewing, and were compared to the sum of VEP responses during brief monocular viewing periods (i.e. left-eye + right-eye viewing). Results: Under monocular conditions, neurotypical controls exhibited an effect that patients failed to demonstrate. That is, the amplitude of the summed monocular VEPs was robustly greater than the amplitude elicited binocularly during the initial sensory processing period. In patients, this binocular effect was absent. Limitations: Patients were all medicated. Ideally, this study would also include first-episode unmedicated patients. Conclusions: These results suggest that short-term compensatory mechanisms that allow healthy individuals to generate robust VEPs in the context of monocular deprivation are not effectively activated in patients with schizophrenia. This simple assay may provide a useful biomarker of short-term plasticity in the psychotic disorders and a target endophenotype for therapeutic interventions.

  6. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    Science.gov (United States)

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today, we are faced with a high-quality virtual world of a completely new nature. For example, we have digital displays with resolution high enough that we cannot distinguish them from the real world. However, little is known about how such high-quality representation contributes to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we psychophysically and physiologically examined the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs counter to the traditional cognitive hierarchy of visual information processing, in which visual input is processed continuously in a bottom-up cascade of cortical regions that analyze increasingly complex information, such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal area (MT+) plays a significant role in monocular depth perception. These results might provide not only new insight into the neural mechanism of depth perception but also a basis for understanding the future progress of our visual system alongside state-of-the-art technologies.

  7. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Estimating 3D positions and velocities of projectiles from monocular views.

    Science.gov (United States)

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
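
    The abstract above describes a nonlinear least-squares formulation over a ballistic motion model. Below is a minimal sketch of that kind of formulation, assuming a pinhole camera at the origin with known focal length and gravity-only motion; the state layout, observation model, and all names are illustrative rather than the authors' exact formulation (Python/SciPy).

      import numpy as np
      from scipy.optimize import least_squares

      F = 800.0                          # assumed focal length in pixels
      G = np.array([0.0, -9.81, 0.0])    # gravity in a camera-aligned frame

      def project(p):
          # Pinhole projection of a 3D point (camera at origin, z forward).
          return F * np.array([p[0] / p[2], p[1] / p[2]])

      def residuals(state, times, uv_obs):
          # state = [x0, y0, z0, vx, vy, vz]: initial position and velocity.
          p0, v0 = state[:3], state[3:]
          res = []
          for t, uv in zip(times, uv_obs):
              p = p0 + v0 * t + 0.5 * G * t ** 2   # ballistic motion model
              res.append(project(p) - uv)
          return np.concatenate(res)

      def localize_projectile(times, uv_obs, init_guess):
          # Local optimization; the paper argues the cost's local convexity
          # makes a local method sufficient.
          sol = least_squares(residuals, init_guess, args=(times, uv_obs))
          return sol.x[:3], sol.x[3:]    # estimated initial position, velocity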

  9. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    Science.gov (United States)

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a single moving camera cannot by itself be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which the method has a unique solution is provided. An extended application of the method is not only to reconstruct the 3D trajectory but also to capture the orientation of the moving object, which cannot be obtained by PnP methods when features are lacking. It is a breakthrough improvement that develops intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition for the existence of a definite solution is derived from equivalence relations among the orders of the object's trajectory equations, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also gives good results for more complicated trajectories, making it widely applicable.

  10. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  11. Perception of 3D spatial relations for 3D displays

    Science.gov (United States)

    Rosen, Paul; Pizlo, Zygmunt; Hoffmann, Christoph; Popescu, Voicu S.

    2004-05-01

    We test perception of 3D spatial relations in 3D images rendered by a 3D display (Perspecta from Actuality Systems) and compare it to that of a high-resolution flat panel display. 3D images provide the observer with such depth cues as motion parallax and binocular disparity. Our 3D display is a device that renders a 3D image by displaying, in rapid succession, radial slices through the scene on a rotating screen. The image is contained in a glass globe and can be viewed from virtually any direction. In the psychophysical experiment several families of 3D objects are used as stimuli: primitive shapes (cylinders and cuboids), and complex objects (multi-story buildings, cars, and pieces of furniture). Each object has at least one plane of symmetry. On each trial an object or its "distorted" version is shown at an arbitrary orientation. The distortion is produced by stretching an object in a random direction by 40%. This distortion must eliminate the symmetry of an object. The subject's task is to decide whether or not the presented object is distorted under several viewing conditions (monocular/binocular, with/without motion parallax, and near/far). The subject's performance is measured by the discriminability d', which is a conventional dependent variable in signal detection experiments.

  12. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations.

    Science.gov (United States)

    Binda, Paola; Lunghi, Claudia

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  13. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    This paper investigates the parallax error, which is a common problem of many video-based monocular mobile gaze trackers. The parallax error is defined and described using the epipolar geometry in a stereo camera setup. The main parameters that change the error are introduced and it is shown how...

  14. Monocular SLAM for Autonomous Robots with Enhanced Features Initialization

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2014-04-01

    Full Text Available This work presents a variant approach to the monocular SLAM problem focused in exploiting the advantages of a human-robot interaction (HRI framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM, a known monocular technique, several but crucial modifications are introduced taking advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF features matching, is discussed. Experimental validation is provided through results from real data with results showing the improvements in terms of more features correctly initialized, with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion in terms of how a real-time implementation could take advantage of this approach is provided.

  15. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused in exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several but crucial modifications are introduced taking advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF features matching, is discussed. Experimental validation is provided through results from real data with results showing the improvements in terms of more features correctly initialized, with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion in terms of how a real-time implementation could take advantage of this approach is provided.

  16. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, “hippus,” spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  17. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    Science.gov (United States)

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.

  18. Monocular and binocular edges enhance the perception of stereoscopic slant.

    Science.gov (United States)

    Wardle, Susan G; Palmisano, Stephen; Gillam, Barbara J

    2014-07-01

    Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
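
    The full model couples a deep CNN with a continuous CRF; the sketch below shows only the regression part, a tiny fully convolutional network trained with a per-pixel loss on toy data, as an illustration of learning depth from single images rather than a reproduction of the authors' architecture (Python/PyTorch; all sizes are arbitrary).

      import torch
      import torch.nn as nn

      class TinyDepthNet(nn.Module):
          # Minimal fully convolutional regressor: RGB image in, per-pixel log-depth out.
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 1, 3, padding=1),
              )

          def forward(self, x):
              return self.net(x)

      model = TinyDepthNet()
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)
      images = torch.rand(4, 3, 64, 64)         # toy batch of images
      gt_log_depth = torch.rand(4, 1, 64, 64)   # toy per-pixel targets

      opt.zero_grad()
      pred = model(images)
      loss = nn.functional.mse_loss(pred, gt_log_depth)  # unary-style regression loss
      loss.backward()
      opt.step()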

  20. Eliminating accommodation-convergence conflicts in stereoscopic displays: Can multiple-focal-plane displays elicit continuous and consistent vergence and accommodation responses?

    Science.gov (United States)

    MacKenzie, Kevin J.; Watt, Simon J.

    2010-02-01

    Conventional stereoscopic displays present images at a fixed focal distance. Depth variations in the depicted scene therefore result in conflicts between the stimuli to vergence and to accommodation. The resulting decoupling of accommodation and vergence responses can cause adverse consequences, including reduced stereo performance, difficulty fusing binocular images, and fatigue and discomfort. These problems could be eliminated if stereo displays could present correct focus cues. A promising approach to achieving this is to present each eye with a sum of images presented at multiple focal planes, and to approximate continuous variations in focal distance by distributing light energy across image planes - a technique referred to as depth-filtering. Here we describe a novel multi-plane display in which we can measure accommodation and vergence responses. We report an experiment in which we compare these oculomotor responses to real stimuli and depth-filtered simulations of the same distance. Vergence responses were generally similar across conditions. Accommodation responses to depth-filtered images were inaccurate, however, showing an overshoot of the target, particularly in response to a small step-change in stimulus distance. This is surprising because we have previously shown that blur-driven accommodation to the same stimuli, viewed monocularly, is accurate and reliable. We speculate that an initial convergence-driven accommodation response, in combination with a weaker accommodative stimulus from depth-filtered images, leads to this overshoot. Our results suggest that stereoscopic multi-plane displays can be effective, but require smaller image-plane separations than monocular accommodation responses suggest.
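
    A minimal sketch of the depth-filtering idea: split light energy between two focal planes so that the weighted pair approximates an intermediate focal distance. The linear-in-diopters weighting is an assumption commonly made for multi-plane displays, not a detail taken from the abstract.

      def depth_filter_weights(target_m, near_plane_m, far_plane_m):
          """Split intensity between two focal planes so their weighted sum
          approximates a target focal distance (all distances in metres)."""
          d_t, d_n, d_f = 1.0 / target_m, 1.0 / near_plane_m, 1.0 / far_plane_m  # diopters
          w_near = (d_t - d_f) / (d_n - d_f)     # linear interpolation in diopters
          w_near = max(0.0, min(1.0, w_near))    # clamp to the physical range
          return w_near, 1.0 - w_near

      # Example: a 0.5 m target rendered on planes at 0.4 m and 0.8 m.
      print(depth_filter_weights(0.5, 0.4, 0.8))   # about (0.6, 0.4)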

  1. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.

  2. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision-based navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments because of their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measurement errors caused by shakes, we establish a pose-range variation model. The algebraic relation between the distance increment and the camera's pose variation is then formulated. The pose variations are expressed as roll, pitch, and yaw angle changes and used to evaluate the pixel coordinate increment. To demonstrate the effectiveness of the proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach significantly improves the range accuracy.
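
    A minimal sketch of the kind of pose-to-pixel relation such a rectification model rests on, for a pinhole camera undergoing a pure pitch change; the focal length, principal point, and one-dimensional treatment are illustrative assumptions, not the paper's full pose-range variation model.

      import numpy as np

      def pixel_shift_from_pitch(v_pixel, delta_pitch_rad, f_pixels, cy):
          """Predict the new vertical image coordinate of a static point after
          the camera pitches by delta_pitch_rad (pinhole model, x-axis rotation)."""
          # Back-project the pixel to a viewing angle, add the pitch change,
          # and re-project: a 1-D version of the pose-range relation.
          angle = np.arctan2(v_pixel - cy, f_pixels)
          return cy + f_pixels * np.tan(angle + delta_pitch_rad)

      # Example: a point 50 px below the principal point, a 1-degree pitch change.
      print(pixel_shift_from_pitch(350.0, np.deg2rad(1.0), f_pixels=600.0, cy=300.0))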

  3. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  4. The effect of induced monocular blur on measures of stereoacuity.

    Science.gov (United States)

    Odell, Naomi V; Hatt, Sarah R; Leske, David A; Adams, Wendy E; Holmes, Jonathan M

    2009-04-01

    To determine the effect of induced monocular blur on stereoacuity measured with real depth and random dot tests. Monocular visual acuity deficits (range, 20/15 to 20/1600) were induced with 7 different Bangerter filters, and stereoacuity was measured with real depth tests (Frisby and FD2) and the Preschool Randot (PSR) and Distance Randot (DR) random dot tests. Stereoacuity results were grouped as either "fine" (60 arcsec or better) or "coarse/nil" (200 arcsec to nil) stereo. Across visual acuity deficits, stereoacuity was more severely degraded with random dot (PSR, DR) than with real depth (Frisby, FD2) tests. Degradation to worse-than-fine stereoacuity consistently occurred at 0.7 logMAR (20/100) or worse for Frisby, 0.1 logMAR (20/25) or worse for PSR, and 0.1 logMAR (20/25) or worse for FD2. There was no meaningful threshold for the DR because worse-than-fine stereoacuity was associated with -0.1 logMAR (20/15). Coarse/nil stereoacuity was consistently associated with 1.2 logMAR (20/320) or worse for Frisby, 0.8 logMAR (20/125) or worse for PSR, 1.1 logMAR (20/250) or worse for FD2, and 0.5 logMAR (20/63) or worse for DR. Stereoacuity thresholds are more easily degraded by reduced monocular visual acuity with random dot tests (PSR and DR) than with real depth tests (Frisby and FD2). We have defined levels of monocular visual acuity degradation associated with fine and nil stereoacuity. These findings have important implications for testing stereoacuity in clinical populations.

  5. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  6. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.
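
    A minimal sketch of the laser-line triangulation geometry such a scanner typically relies on: intersect the camera ray through a laser-lit pixel with the calibrated laser plane. The intrinsic matrix and plane coefficients below are illustrative assumptions, not values from the paper.

      import numpy as np

      def triangulate_laser_pixel(u, v, K, plane):
          """Return the 3D point (camera frame) where the viewing ray through
          pixel (u, v) meets the laser plane n.x = d, with plane = (nx, ny, nz, d)."""
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
          n, d = np.asarray(plane[:3]), plane[3]
          t = d / np.dot(n, ray)                           # ray-plane intersection
          return t * ray

      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
      laser_plane = (0.0, -0.6, 0.8, 0.4)   # assumed known from calibration
      print(triangulate_laser_pixel(400.0, 260.0, K, laser_plane))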

  7. Monocular nasal hemianopia from atypical sphenoid wing meningioma.

    Science.gov (United States)

    Stacy, Rebecca C; Jakobiec, Frederick A; Lessell, Simmons; Cestari, Dean M

    2010-06-01

    Neurogenic monocular nasal field defects respecting the vertical midline are quite uncommon. We report a case of a unilateral nasal hemianopia that was caused by compression of the left optic nerve by a sphenoid wing meningioma. Histological examination revealed that the pathology of the meningioma was consistent with that of an atypical meningioma, which carries a guarded prognosis with increased chance of recurrence. The tumor was debulked surgically, and the patient's visual field defect improved.

  8. Indoor monocular mobile robot navigation based on color landmarks

    Institute of Scientific and Technical Information of China (English)

    LUO Yuan; ZHANG Bai-sheng; ZHANG Yi; LI Ling

    2009-01-01

    A robot landmark navigation system based on a monocular camera was studied theoretically and experimentally. First, the landmark layout and its data structure in the software are given; then the acquisition of landmark coordinates by the robot and the global localization of the robot are described; finally, experiments on a Pioneer III mobile robot show that the system works well in different topographic situations without losing the signposts.

  9. Altered anterior visual system development following early monocular enucleation

    Directory of Open Access Journals (Sweden)

    Krista R. Kelly

    2014-01-01

    Conclusions: The novel finding of an asymmetry in morphology of the anterior visual system following long-term survival from early monocular enucleation indicates altered postnatal visual development. Possible mechanisms behind this altered development include recruitment of deafferented cells by crossing nasal fibres and/or geniculate cell retention via feedback from primary visual cortex. These data highlight the importance of balanced binocular input during postnatal maturation for typical anterior visual system morphology.

  10. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  11. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test, HOHCT. The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  12. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    Science.gov (United States)

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
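
    A minimal sketch of the height-based scale correction mentioned above: compare the ground-plane height recovered by monocular SFM (known only up to scale) with the camera's known mounting height, and rescale the estimated motion accordingly. Names and numbers are illustrative.

      def correct_scale(known_height_m, estimated_height_sfm, translation_sfm):
          """Rescale an up-to-scale SFM translation so the estimated ground-plane
          height matches the known camera height above the road."""
          scale = known_height_m / estimated_height_sfm
          return [scale * t for t in translation_sfm]

      # Example: SFM places the ground 0.8 units below the camera,
      # but the camera is actually mounted 1.5 m above the road.
      print(correct_scale(1.5, 0.8, [0.02, 0.0, 0.4]))   # metric translation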

  13. Proximity Compatibility and Information Display: The Effects of Space and Color on the Analysis of Aircraft Stall Conditions

    Science.gov (United States)

    1989-10-01

    is displayed in close proximity (Polson, Wickens, Klapp, & Colle, 1989). Conversely, focused attention performance is best when the information to be... Journal of Experimental Psychology: Human Perception and Performance, 9(3), 380-393.

  14. Stochastically optimized monocular vision-based navigation and guidance

    Science.gov (United States)

    Watanabe, Yoko

    -effort guidance (MEG) law for multiple target tracking is applied for a guidance design to achieve the mission. Through simulations, it is shown that the control effort can be reduced by using the MEG-based guidance design instead of a conventional proportional navigation-based one. The navigation and guidance designs are implemented and evaluated in a 6 DoF UAV flight simulation. Furthermore, the vision-based obstacle avoidance system is also tested in a flight test using a balloon as an obstacle. For monocular vision-based control problems, it is well-known that the separation principle between estimation and control does not hold. In other words, that vision-based estimation performance highly depends on the relative motion of the vehicle with respect to the target. Therefore, this thesis aims to derive an optimal guidance law to achieve a given mission under the condition of using the EKF-based relative navigation. Unlike many other works on observer trajectory optimization, this thesis suggests a stochastically optimized guidance design that minimizes the expected value of a cost function of the guidance error and the control effort subject to the EKF prediction and update procedures. A suboptimal guidance law is derived based on an idea of the one-step-ahead (OSA) optimization, in which the optimization is performed under the assumption that there will be only one more final measurement at the one time step ahead. The OSA suboptimal guidance law is applied to problems of vision-based rendezvous and vision-based obstacle avoidance. Simulation results are presented to show that the suggested guidance law significantly improves the guidance performance. The OSA suboptimal optimization approach is generalized as the n-step-ahead (nSA) optimization for an arbitrary number of n. Furthermore, the nSA suboptimal guidance law is extended to the p %-ahead suboptimal guidance by changing the value of n at each time step depending on the current time. The nSA (including the OSA) and

  15. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for detecting distant moving obstacles such as cars and bicycles with a monocular camera that cooperates with ultrasonic sensors in a low-cost setup. We aim to detect distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. Frame differencing is applied to find obstacles after compensating for the camera's ego-motion. Each obstacle is then separated from the others in an independent region and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car show that the method is effective for detecting distant moving obstacles in real time.
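
    A minimal sketch of frame differencing with ego-motion compensation using OpenCV: estimate a homography from matched features, warp the previous frame onto the current one, then difference and threshold. The feature detector, RANSAC threshold, and binarization level are illustrative choices, not the paper's settings.

      import cv2
      import numpy as np

      def moving_obstacle_mask(prev_gray, curr_gray):
          """Compensate camera ego-motion with a homography, then frame-difference."""
          orb = cv2.ORB_create(1000)
          kp1, des1 = orb.detectAndCompute(prev_gray, None)
          kp2, des2 = orb.detectAndCompute(curr_gray, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
          src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # dominant background motion
          h, w = curr_gray.shape
          warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
          diff = cv2.absdiff(curr_gray, warped_prev)            # residual motion = candidates
          _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
          return mask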

  16. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    Science.gov (United States)

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first uses the Discriminative Shape Regression method to locate facial feature points in the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  17. Short-term monocular deprivation strengthens the patched eye's contribution to binocular combination.

    Science.gov (United States)

    Zhou, Jiawei; Clavagnier, Simon; Hess, Robert F

    2013-04-18

    Binocularity is a fundamental property of primate vision. Ocular dominance describes the perceptual weight given to the inputs from the two eyes in their binocular combination. There is a distribution of sensory dominance within the normal binocular population with most subjects having balanced inputs while some are dominated by the left eye and some by the right eye. Using short-term monocular deprivation, the sensory dominance can be modulated as, under these conditions, the patched eye's contribution is strengthened. We address two questions: Is this strengthening a general effect such that it is seen for different types of sensory processing? And is the strengthening specific to pattern deprivation, or does it also occur for light deprivation? Our results show that the strengthening effect is a general finding involving a number of sensory functions, and it occurs as a result of both pattern and light deprivation.

  18. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  19. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep d

  20. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  1. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on top of the sorting system is used to obtain images of the gears on the conveyor belt. The gears' features, including the number of holes, number of teeth, and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' positions and produce the trigger signals for pneumatic cylinders. The automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
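
    A minimal sketch of one of the features mentioned above, counting a gear's holes from the contour hierarchy with OpenCV; the Otsu thresholding and the dark-gear-on-light-belt assumption are illustrative, not details from the paper.

      import cv2

      def count_gear_holes(gray_image):
          """Count interior holes of the largest dark blob (the gear) in a grayscale image."""
          _, binary = cv2.threshold(gray_image, 0, 255,
                                    cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
          # [-2:] keeps this working across OpenCV 3 and 4 return conventions.
          contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                                 cv2.CHAIN_APPROX_SIMPLE)[-2:]
          if hierarchy is None:
              return 0
          # The largest outer contour is taken to be the gear body.
          outer = [i for i, h in enumerate(hierarchy[0]) if h[3] == -1]
          gear = max(outer, key=lambda i: cv2.contourArea(contours[i]))
          # Holes are contours whose parent is the gear contour.
          return sum(1 for h in hierarchy[0] if h[3] == gear)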

  2. Monocular occlusions determine the perceived shape and depth of occluding surfaces.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2010-06-01

    Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.

  3. Visibility of Monocular Symbology in Transparent Head-Mounted Display Applications

    Science.gov (United States)

    2015-07-08

    over terrain, were selected: 0.0 (low), 0.3 (medium), and 3.0 (high) eye-heights/second (0, 36, 324 kph). These ego-motion speeds correspond to no... i.e. thresholds worsen more than might be predicted by lack of summation alone). However, ego-motion does not appear to increase this suppression... E., Levi, D., Harwerth, R. & White, J. Color vision is altered during the suppression phase of binocular rivalry. Science, 218, 802-804.

  4. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on a preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but unlike many other approaches is designed to work with large-scale objects as well. To localize aerial vehicle position the system of equations relating object coordinates in space and observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. Video database contained different types of aerial vehicles: aircrafts, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers under regular daylight conditions.
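
    A minimal sketch of the final step described above: once the detected aircraft has been localized in two successive frames, range and time to collision follow from the relative motion under a constant-velocity assumption. Coordinates and timing below are illustrative.

      import numpy as np

      def range_and_ttc(p_prev, p_curr, dt):
          """Estimate slant range and time-to-collision from two successive 3D
          positions of a detected aircraft (own-ship frame), assuming constant velocity."""
          p_prev, p_curr = np.asarray(p_prev), np.asarray(p_curr)
          rng = np.linalg.norm(p_curr)
          closing_speed = (np.linalg.norm(p_prev) - rng) / dt   # positive if approaching
          ttc = rng / closing_speed if closing_speed > 0 else float('inf')
          return rng, ttc

      # Example: slant range shrinks from 1500 m to 1460 m over one second.
      print(range_and_ttc([1200.0, 900.0, 0.0], [1168.0, 876.0, 0.0], 1.0))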

  5. Binocular function during unequal monocular input.

    Science.gov (United States)

    Kim, Taekjun; Freeman, Ralph D

    2017-02-01

    The fine task of stereoscopic depth discrimination in human subjects requires a functional binocular system. Behavioral investigations show that relatively small binocular abnormalities can diminish stereoscopic acuity. Clinical evaluations are consistent with this observation. Neurons in visual cortex represent the first stage of processing of the binocular system. Cells at this level are generally acutely sensitive to differences in relative depth. However, an apparent paradox in previous work demonstrates that tuning for binocular disparities remains relatively constant even when large contrast differences are imposed between left and right eye stimuli. This implies a range of neural binocular function that is at odds with behavioral findings. To explore this inconsistency, we have conducted psychophysical tests by which human subjects view vertical sinusoidal gratings drifting in opposite directions to left and right eyes. If the opposite drifting gratings are integrated in visual cortex, as wave theory and neurophysiological data predict, the subjects should perceive a fused stationary grating that is counter-phasing in place. However, this behavioral combination may not occur if there are differences in contrast and therefore signal strength between left and right eye stimuli. As expected for the control condition, our results show fused counter-phase perception for equal inter-ocular grating contrasts. Our experimental tests show a striking retention of counter-phase perception even for relatively large differences in inter-ocular contrast. This finding demonstrates that binocular integration, although relatively coarse, can occur during substantial differences in left and right eye signal strength. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Monocular camera and IMU integration for indoor position estimation.

    Science.gov (United States)

    Zhang, Yinlong; Tan, Jindong; Zeng, Ziming; Liang, Wei; Xia, Ye

    2014-01-01

    This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we point the monocular camera downward at the floor and collect successive frames in which textures are orderly distributed and feature points are robustly detected, rather than using a forward-oriented camera to sample unknown and disordered scenes with a predetermined frame rate and auto-focus metric scale. The camera adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, distinctive image feature point matching approaches are employed for visual localization: optical flow for the fast-motion mode, and the Canny edge detector, Harris feature point detector, and SIFT descriptor for the slow-motion mode. For very fast motion and abrupt rotation, where images from the camera are blurred and unusable, the Extended Kalman Filter is used to estimate the IMU outputs and derive the corresponding trajectory. Experimental results validate that the proposed method is effective and accurate for indoor positioning. Since the system is computationally efficient and compact, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.
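
    A minimal sketch of the constant-metric-scale idea for a downward-facing camera: with a known height above the floor and a pinhole model, a pixel displacement measured by optical flow maps directly to a metric displacement on the floor plane. The height, focal length, and yaw-only rotation are illustrative assumptions, not the paper's full pipeline.

      import numpy as np

      def flow_to_metric_displacement(flow_px, height_m, f_pixels, yaw_rad):
          """Convert a mean optical-flow vector (pixels) from a downward camera
          into a floor-plane displacement (metres) in the world frame."""
          metres_per_pixel = height_m / f_pixels        # ground sampling distance
          dx_cam = flow_px[0] * metres_per_pixel
          dy_cam = flow_px[1] * metres_per_pixel
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)       # heading taken from the IMU
          return np.array([c * dx_cam - s * dy_cam,
                           s * dx_cam + c * dy_cam])

      # Example: 12-pixel flow at 1.2 m height, f = 600 px, heading 30 degrees.
      print(flow_to_metric_displacement(np.array([12.0, 0.0]), 1.2, 600.0, np.deg2rad(30)))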

  7. Surface formation and depth in monocular scene perception.

    Science.gov (United States)

    Albert, M K

    1999-01-01

    The visual perception of monocular stimuli perceived as 3-D objects has received considerable attention from researchers in human and machine vision. However, most previous research has focused on how individual 3-D objects are perceived. Here this is extended to a study of how the structure of 3-D scenes containing multiple, possibly disconnected objects and features is perceived. Da Vinci stereopsis, stereo capture, and other surface formation and interpolation phenomena in stereopsis and structure-from-motion suggest that small features having ambiguous depth may be assigned depth by interpolation with features having unambiguous depth. I investigated whether vision may use similar mechanisms to assign relative depth to multiple objects and features in sparse monocular images, such as line drawings, especially when other depth cues are absent. I propose that vision tends to organize disconnected objects and features into common surfaces to construct 3-D-scene interpretations. Interpolations that are too weak to generate a visible surface percept may still be strong enough to assign relative depth to objects within a scene. When there exists more than one possible surface interpolation in a scene, the visual system's preference for one interpolation over another seems to be influenced by a number of factors, including: (i) proximity, (ii) smoothness, (iii) a preference for roughly frontoparallel surfaces and 'ground' surfaces, (iv) attention and fixation, and (v) higher-level factors. I present a variety of demonstrations and an experiment to support this surface-formation hypothesis.

  8. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and being economic. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain the metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide initial depth map and scale correction information during the SLAM process. The involved chessboard provides the absolute scale of scene, and it is seen as a bridge between the camera visual coordinate and the world coordinate. The scene is reconstructed as a series of key frames with their poses and correlative semidense depth maps, using a highly accurate pose estimation achieved by direct grid point-based alignment. The estimated pose is coupled with depth map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames and the calibration chessboard is used to correct the accumulated pose error. At the end of this paper, several indoor experiments are conducted. The results suggest that the proposed approach is able to achieve higher reconstruction accuracy when compared with the traditional LSD-SLAM approach. And the approach can also run in real time on a commonly used computer.
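
    A minimal sketch of using the chessboard to fix the metric scale: detect the board in one frame, recover its metric pose with solvePnP, and compare the metric camera-to-board distance with the corresponding (arbitrary-scale) SLAM estimate. The pattern size, square size, and slam_distance argument are illustrative assumptions.

      import cv2
      import numpy as np

      def chessboard_scale(gray, K, dist_coeffs, pattern=(9, 6), square_m=0.025,
                           slam_distance=1.0):
          """Return the factor by which an up-to-scale SLAM map should be multiplied,
          given one view of a chessboard with known square size (metres)."""
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if not found:
              return None
          # 3D corner coordinates in the board frame, in metres.
          objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
          objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_m
          ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist_coeffs)
          if not ok:
              return None
          metric_distance = float(np.linalg.norm(tvec))   # camera-to-board distance (m)
          return metric_distance / slam_distance          # scale for SLAM translations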

  9. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Full Text Available Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing. Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.

  10. (-)-Bornyl acetate induces autonomic relaxation and reduces arousal level after visual display terminal work without any influences of task performance in low-dose condition.

    Science.gov (United States)

    Matsubara, Eri; Fukagawa, Mio; Okamoto, Tsuyoshi; Ohnuki, Koichiro; Shimizu, Kuniyoshi; Kondo, Ryuichiro

    2011-04-01

    (-)-Bornyl acetate is the main volatile constituent in numerous conifer oils and has a camphoraceous, pine-needle-like odor. It is frequently used as the conifer needle composition in soap, bath products, room sprays, and pharmaceutical products. However, the psychophysiological effects of (-)-bornyl acetate remained unclear. We investigated the effects of breathing air mixed with (-)-bornyl acetate at different doses (low-dose and high-dose conditions) on the individuals during and after VDT (visual display terminal) work using a visual discrimination task. The amounts of (-)-bornyl acetate through our odorant delivery system for 40 min were 279.4 µg in the low-dose and 716.3 µg in the high-dose (-)-bornyl acetate condition. (-)-Bornyl acetate induced changes of autonomic nervous system for relaxation and reduced arousal level after VDT work without any influences of task performance in low-dose condition, but not in high-dose condition.

  11. Relationship between monocularly deprivation and amblyopia rats and visual system development

    Institute of Scientific and Technical Information of China (English)

    Yu Ma

    2014-01-01

    Objective: To explore the changes in the lateral geniculate body and visual cortex in monocular strabismus and form-deprived amblyopic rats, and the plastic stage of visual development and visual plasticity in adult rats. Methods: A total of 60 SD rats aged 13 d were randomly divided into three groups, A, B, and C, with 20 in each group. Group A was set as the normal control group without any processing; group B was the strabismus amblyopia group, in which unilateral extraocular rectus resection was used to establish the strabismus amblyopia model; group C was the monocular form deprivation amblyopia group, using unilateral eyelid edge resection plus lid suture. At the early (P25), middle (P35), and late (P45) phases of visual development and in adulthood (P120), the lateral geniculate body and visual cortex area 17 of five rats in each group were extracted for C-fos immunocytochemistry. Neuronal morphological changes in the lateral geniculate body and visual cortex were observed, the differences in C-fos-positive neurons induced by light stimulation were measured in each group, and the condition of radiation development of P120 amblyopic adult rats was observed. Results: In groups B and C, C-fos positive cells were significantly lower than in the control group at P25 (P0.05), and the level of C-fos protein positive cells in group B was significantly lower than that in group A (P<0.05). The binocular C-fos protein positive cell levels of groups B and C were significantly higher than those of the control group at P35, P45 and P120, with statistically significant differences (P<0.05). Conclusions: The increase of C-fos expression in lateral geniculate body and visual cortex neurons of adult amblyopic rats suggests that the visual cortex neurons retain a certain degree of visual plasticity.

  12. Candida-streptococcal mucosal biofilms display distinct structural and virulence characteristics depending on growth conditions and hyphal morphotypes.

    Science.gov (United States)

    Bertolini, M M; Xu, H; Sobue, T; Nobile, C J; Del Bel Cury, A A; Dongari-Bagtzoglou, A

    2015-08-01

    Candida albicans and streptococci of the mitis group form communities in multiple oral sites, where moisture and nutrient availability can change spatially or temporally. This study evaluated structural and virulence characteristics of Candida-streptococcal biofilms formed on moist or semidry mucosal surfaces, and tested the effects of nutrient availability and hyphal morphotype on dual-species biofilms. Three-dimensional models of the oral mucosa formed by immortalized keratinocytes on a fibroblast-embedded collagenous matrix were used. Infections were carried out using Streptococcus oralis strain 34, in combination with a C. albicans wild-type strain, or pseudohyphal-forming mutant strains. Increased moisture promoted a homogeneous surface biofilm by C. albicans. Dual biofilms had a stratified structure, with streptococci growing in close contact with the mucosa and fungi growing on the bacterial surface. Under semidry conditions, Candida formed localized foci of dense growth, which promoted focal growth of streptococci in mixed biofilms. Candida biofilm biovolume was greater under moist conditions, albeit with minimal tissue invasion, compared with semidry conditions. Supplementing the infection medium with nutrients under semidry conditions intensified growth, biofilm biovolume and tissue invasion/damage, without changing biofilm structure. Under these conditions, the pseudohyphal mutants and S. oralis formed defective superficial biofilms, with most bacteria in contact with the epithelial surface, below a pseudohyphal mass, resembling biofilms growing in a moist environment. The presence of S. oralis promoted fungal invasion and tissue damage under all conditions. We conclude that moisture, nutrient availability, hyphal morphotype and the presence of commensal bacteria influence the architecture and virulence characteristics of mucosal fungal biofilms.

  13. Saccade amplitude disconjugacy induced by aniseikonia: role of monocular depth cues.

    Science.gov (United States)

    Pia Bucci, M; Kapoula, Z; Eggert, T

    1999-09-01

    The conjugacy of saccades is rapidly modified if the images are made unequal for the two eyes. Disconjugacy persists even in the absence of disparity, which indicates learning. Binocular visual disparity is a major cue to depth and is believed to drive the disconjugacy of saccades to aniseikonic images. The goal of the present study was to test whether monocular depth cues can also influence the disconjugacy of saccades. Three experiments were performed in which subjects were exposed for 15-20 min to a 10% image size inequality. Three different images were used: a grid that contained a single monocular depth cue strongly indicating a frontoparallel plane; a random-dot pattern that contained a less prominent monocular depth cue (absence of texture gradient) which also indicates the frontoparallel plane; and a complex image with several overlapping geometric forms that contained a variety of monocular depth cues. Saccades became disconjugate in all three experiments. The disconjugacy was larger and more persistent in the experiment using the random-dot pattern, which had the least prominent monocular depth cues. The complex image, which had a large variety of monocular depth cues, produced the most variable and least persistent disconjugacy. We conclude that monocular depth cues modulate the disconjugacy of saccades stimulated by the disparity of aniseikonic images.

  14. Optimizing the refolding conditions of self-assembling polypeptide nanoparticles that serve as repetitive antigen display systems.

    Science.gov (United States)

    Yang, Yongkun; Ringler, Philippe; Müller, Shirley A; Burkhard, Peter

    2012-01-01

    Nanoparticles show great promise as potent vaccine candidates since they are readily taken up by the antigen presenting cells of the immune system. The particle size and the density of the B cell epitopes on the surface of the particles greatly influence the strength of the humoral immune response. We have developed a novel type of nanoparticle composed of peptide building blocks (Raman et al., 2006) and have used such particles to design vaccines against malaria and SARS (Kaba et al., 2009; Pimentel et al., 2009). Here we investigate the biophysical properties and the refolding conditions of a prototype of these self-assembling polypeptide nanoparticles (SAPNs). SAPNs are formed from a peptide containing a pentameric and a trimeric coiled-coil domain. Under near-physiological conditions the peptide self-assembles into roughly spherical SAPNs about 27 nm in diameter. The average size of the SAPNs increases with the salt concentration. The optimal pH for their formation is between 7.5 and 8.5, while aggregation occurs at lower and higher values. A glycerol concentration of about 5% v/v is required for the formation of SAPNs with regular spherical shapes. These studies will help to optimize the immunological properties of SAPNs.

  15. Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens

    Science.gov (United States)

    Alonso, Julia R.

    2016-09-01

    3D-scene acquisition and representation is important in many areas ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized using a piecewise-planar image formation model with a depth-variant point spread function (PSF), along with the known focusing distances at which the images of the stack were acquired. The use of a depth-variant PSF allows the method to be applied to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting each eye the corresponding viewpoint according to the disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimating the depth map or segmenting the in-focus regions. Experimental results for both real and synthetic images are provided and presented as anaglyphs, but the method could easily be adapted to 3D displays based on parallax barriers or polarized light.

  16. Novel approach for mobile robot localization using monocular vision

    Science.gov (United States)

    Zhong, Zhiguang; Yi, Jianqiang; Zhao, Dongbin; Hong, Yiping

    2003-09-01

    This paper presents a novel approach for mobile robot localization using monocular vision. The proposed approach locates the robot relative to the target toward which it moves. Two points are selected on the target as feature points. Once the image coordinates of the two feature points are detected, the position and motion direction of the robot can be determined from the detected coordinates. Unlike previously reported geometric pose estimation or landmark matching methods, this approach requires neither artificial landmarks nor an accurate map of the indoor environment. It requires less computation and greatly simplifies the localization problem. The validity and flexibility of the proposed approach are demonstrated by experiments performed on real images. The results show that this new approach is not only simple and flexible but also has high localization precision.
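
    A minimal sketch of the geometry implied above, assuming a calibrated pinhole camera, a roughly fronto-parallel target, and a known physical separation W between the two feature points; the abstract does not give the exact derivation, and all names here are illustrative:

    ```python
    import math

    def locate_target(u1, u2, fx, cx, W):
        """Estimate range and bearing to a target from two image feature points.

        u1, u2 : horizontal pixel coordinates of the two feature points
        fx, cx : focal length (px) and principal point (px) of the calibrated camera
        W      : known physical separation of the two points on the target (m)
        Assumes a pinhole camera and a roughly fronto-parallel target.
        """
        # Bearing of each feature point relative to the optical axis
        a1 = math.atan2(u1 - cx, fx)
        a2 = math.atan2(u2 - cx, fx)
        # Angle subtended by the target at the camera
        theta = abs(a1 - a2)
        # Range from the subtended angle and the known separation
        distance = W / (2.0 * math.tan(theta / 2.0))
        # Bearing of the target centre gives the robot's heading error
        bearing = (a1 + a2) / 2.0
        return distance, bearing

    # Example: two points 0.4 m apart imaged at columns 280 and 360 (fx=600 px, cx=320 px)
    print(locate_target(280, 360, 600.0, 320.0, 0.4))
    ```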

  17. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's PlayStation II. The system is compact, low-cost, and requires only a one-time calibration at the factory. It includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of the prototype system. The experimental results show that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
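
    The abstract only states that the micro-controller recovers the 3D position from the measured intensity and the 2D PSD position. One simple way to do this, assuming an inverse-square falloff for the active IR LED and a pinhole model (the calibration constant k and all names are hypothetical, not taken from the paper), is:

    ```python
    import numpy as np

    def marker_3d(x_psd, y_psd, intensity, f, k):
        """Recover a rough 3D marker position from a PSD reading.

        x_psd, y_psd : marker image position on the PSD (same units as f)
        intensity    : measured IR intensity at the PSD
        f            : effective focal length of the wide-angle lens
        k            : calibration constant so that intensity ~= k / range**2
        """
        # Range from the assumed inverse-square falloff of the active IR LED
        r = np.sqrt(k / intensity)
        # Unit viewing direction from the pinhole model
        d = np.array([x_psd, y_psd, f], dtype=float)
        d /= np.linalg.norm(d)
        return r * d  # 3D position in the camera frame

    print(marker_3d(0.8, -0.3, 2.5e-3, 4.0, 1.0e-2))  # example reading
    ```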

  18. Markerless monocular tracking system for guided external eye surgery.

    Science.gov (United States)

    Monserrat, C; Rupérez, M J; Alcañiz, M; Mataix, J

    2014-12-01

    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. This new tracking system performs a very accurate tracking of the eye by detecting invariant points using only textures that are present in the sclera, i.e., without using traditional features like the pupil and/or cornea reflections, which remain partially or totally occluded in most surgeries. Two known algorithms that compute invariant points and correspondences between pairs of images were implemented in our system: the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The results of experiments performed on phantom eyes show that, with either algorithm, the developed system tracks a sphere at a 360° rotation angle with an error that is lower than 0.5%. Some experiments have also been carried out on images of real eyes, showing promising behavior of the system in the presence of blood or surgical instruments during real eye surgery.

  19. Monocular vision based navigation method of mobile robot

    Institute of Scientific and Technical Information of China (English)

    DONG Ji-wen; YANG Sen; LU Shou-yin

    2009-01-01

    A trajectory tracking method is presented for the visual navigation of a monocular mobile robot. The robot moves along a line trajectory drawn beforehand and stops on a stop-sign to perform a special task. The robot uses a forward-looking color digital camera to capture the scene in front of it, and the HSI color model is used to segment the trajectory and the stop-sign from the background. A "sampling estimate" method is then used to calculate the navigation parameters. The stop-sign is easily recognized, and 256 different signs can be distinguished. Tests indicate that the method tolerates a wide range of brightness and offers good robustness and real-time performance.

  20. Monocular Obstacle Detection for Real-World Environments

    Science.gov (United States)

    Einhorn, Erik; Schroeter, Christof; Gross, Horst-Michael

    In this paper, we present a feature-based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted on the front of a mobile robot. Using various techniques we are able to produce a precise reconstruction that is almost free of outliers and can therefore be used for reliable obstacle detection and avoidance. In real-world field tests we show that the presented approach is able to detect obstacles that cannot be seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can increase the detection rate of obstacles considerably, allowing the autonomous use of mobile robots in complex public and home environments.

  1. Monocular 3D scene reconstruction at absolute scale

    Science.gov (United States)

    Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael

    In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.
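
    The core idea, simultaneously minimising the bundle-adjustment reprojection error and the absolute depth error from Depth-from-Defocus, can be sketched as a single residual vector handed to a least-squares solver such as scipy.optimize.least_squares. The following is an illustrative sketch with hypothetical names, not the authors' implementation:

    ```python
    import numpy as np

    def combined_residuals(points_3d, cam_poses, K, observations, dfd_depths, lam):
        """Residuals coupling Structure-from-Motion with Depth-from-Defocus.

        points_3d    : (N, 3) scene points in world coordinates
        cam_poses    : list of (R, t) per frame (3x3 rotation, length-3 translation)
        K            : 3x3 camera intrinsic matrix
        observations : dict (frame, point) -> observed 2D pixel position
        dfd_depths   : dict (frame, point) -> absolute depth from Depth-from-Defocus
        lam          : weight of the absolute-depth regularisation term
        """
        res = []
        for (j, i), uv in observations.items():
            R, t = cam_poses[j]
            Xc = R @ points_3d[i] + t              # point in camera coordinates
            proj = K @ Xc
            res.extend(proj[:2] / proj[2] - uv)    # reprojection error (pixels)
            if (j, i) in dfd_depths:               # absolute-scale constraint
                res.append(lam * (Xc[2] - dfd_depths[(j, i)]))
        return np.asarray(res)
    ```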

  2. Military display market segment: wearable and portable

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    2003-09-01

    The military display market (MDM) is analyzed in terms of one of its segments, wearable and portable displays. Wearable and portable displays are those embedded in gear worn or carried by warfighters. Categories include hand-mobile (direct-view and monocular/binocular), palm-held, head/helmet-mounted, body-strapped, knee-attached, lap-born, neck-lanyard, and pocket/backpack-stowed. Some 62 fielded and developmental display sizes are identified in this wearable/portable MDM segment. Parameters requiring special consideration, such as weight, luminance ranges, light emission, viewing angles, and chromaticity coordinates, are summarized and compared. Ruggedized commercial versus commercial off-the-shelf designs are contrasted; and a number of custom displays are also found in this MDM category. Display sizes having aggregate quantities of 5,000 units or greater or having 2 or more program applications are identified. Wearable and portable displays are also analyzed by technology (LCD, LED, CRT, OLED and plasma). The technical specifications and program history of several high-profile military programs are discussed to provide a systems context for some representative displays and their function. As of August 2002 our defense-wide military display market study has documented 438,882 total display units distributed across 1,163 display sizes and 438 weapon systems. Wearable and portable displays account for 202,593 displays (46% of total DoD) yet comprise just 62 sizes (5% of total DoD) in 120 weapons systems (27% of total DoD). Some 66% of these wearable and portable applications involve low information content displays comprising just a few characters in one color; however, there is an accelerating trend towards higher information content units capable of showing changeable graphics, color and video.

  3. Integration of monocular motion signals and the analysis of interocular velocity differences for the perception of motion-in-depth.

    Science.gov (United States)

    Shioiri, Satoshi; Kakehi, Daisuke; Tashiro, Tomoyoshi; Yaguchi, Hirohisa

    2009-12-09

    We investigated how the mechanism for perceiving motion-in-depth based on interocular velocity differences (IOVDs) integrates signals from the motion spatial frequency (SF) channels. We focused on the question whether this integration is implemented before or after the comparison of the velocity signals from the two eyes. We measured spatial frequency selectivity of the MAE of motion in depth (3D MAE). The 3D MAE showed little spatial frequency selectivity, whereas the 2D lateral MAE showed clear spatial frequency selectivity in the same condition. This indicates that the outputs of the monocular motion SF channels are combined before analyzing the IOVD. The presumption was confirmed by the disappearance of the 3D MAE after exposure to superimposed gratings with different spatial frequencies moving in opposite directions. The direction of the 2D MAE depended on the test spatial frequency in the same condition. These results suggest that the IOVD is calculated at a relatively later stage of the motion analysis, and that some monocular information is preserved even after the integration of the motion SF channel outputs.

  4. Spatial constraints of stereopsis in video displays

    Science.gov (United States)

    Schor, Clifton

    1989-01-01

    Recent developments in video technology, such as liquid crystal displays and shutters, have made it feasible to incorporate stereoscopic depth into the 3-D representations on 2-D displays. However, depth has already been vividly portrayed in video displays without stereopsis using the classical artists' depth cues described by Helmholtz (1866) and the dynamic depth cues described in detail by Ittelson (1952). Successful static depth cues include overlap, size, linear perspective, texture gradients, and shading. Effective dynamic cues include looming (Regan and Beverly, 1979) and motion parallax (Rogers and Graham, 1982). Stereoscopic depth is superior to the monocular distance cues under certain circumstances. It is most useful at portraying depth intervals as small as 5 to 10 arc secs. For this reason it is extremely useful in user-video interactions such as telepresence. Objects can be manipulated in 3-D space, for example, while a person who controls the operations views a virtual image of the manipulated object on a remote 2-D video display. Stereopsis also provides structure and form information in camouflaged surfaces such as tree foliage. Motion parallax also reveals form; however, without other monocular cues such as overlap, motion parallax can yield an ambiguous perception. For example, a turning sphere, portrayed as solid by parallax, can appear to rotate either leftward or rightward. However, only one direction of rotation is perceived when stereo-depth is included. If the scene is static, then stereopsis is the principal cue for revealing the camouflaged surface structure. Finally, dynamic stereopsis provides information about the direction of motion in depth (Regan and Beverly, 1979). Clearly there are many spatial constraints, including spatial frequency content, retinal eccentricity, exposure duration, target spacing, and disparity gradient, which - when properly adjusted - can greatly enhance stereodepth in video displays.

  5. Reactivation of thalamocortical plasticity by dark exposure during recovery from chronic monocular deprivation

    Science.gov (United States)

    Montey, Karen L.; Quinlan, Elizabeth M.

    2015-01-01

    Chronic monocular deprivation induces severe amblyopia that is resistant to spontaneous reversal in adulthood. However, dark exposure initiated in adulthood reactivates synaptic plasticity in the visual cortex and promotes recovery from chronic monocular deprivation. Here we show that chronic monocular deprivation significantly decreases the strength of feedforward excitation and significantly decreases the density of dendritic spines throughout the deprived binocular visual cortex. Dark exposure followed by reverse deprivation significantly enhances the strength of thalamocortical synaptic transmission and the density of dendritic spines on principal neurons throughout the depth of the visual cortex. Thus dark exposure reactivates widespread synaptic plasticity in the adult visual cortex, including at thalamocortical synapses, during the recovery from chronic monocular deprivation. PMID:21587234

  6. Dynamic object recognition and tracking of mobile robot by monocular vision

    Science.gov (United States)

    Liu, Lei; Wang, Yongji

    2007-11-01

    Monocular vision is widely used in mobile robot motion control because of its simple structure and ease of use. This paper describes how to recognize and track specified color targets dynamically and precisely with monocular vision, based on the imaging principle. The processing follows the mechanisms of visual processing, including pretreatment and recognition stages. In particular, color models are used to reduce the influence of illumination. Practical algorithms are applied for image segmentation and clustering. After the target is recognized, because a monocular camera cannot obtain depth information directly, a 3D reconstruction principle is used to calculate the distance and direction from the robot to the target. A laser is used after the vision measurement to correct the monocular camera reading. Finally, a visual servo system is designed to enable the robot to dynamically track the moving target.
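
    The color-model segmentation step described above can be sketched with OpenCV as a threshold in HSV space (where the value channel, and hence illumination, matters less) followed by selection of the largest blob. The thresholds and names below are illustrative only, not the paper's algorithm:

    ```python
    import cv2
    import numpy as np

    def find_color_target(frame_bgr, lower_hsv, upper_hsv):
        """Segment a colored target and return its centroid, or None."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)    # hue/saturation are less
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)        # illumination dependent
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        c = max(contours, key=cv2.contourArea)               # keep the largest blob
        m = cv2.moments(c)
        if m["m00"] == 0:
            return None
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    # Example: track an orange target
    # centroid = find_color_target(frame, np.array([5, 120, 80]), np.array([20, 255, 255]))
    ```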

  7. Apparent motion of monocular stimuli in different depth planes with lateral head movements.

    Science.gov (United States)

    Shimono, K; Tam, W J; Ono, H

    2007-04-01

    A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced compared to that in Experiment 1 when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed using the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were treated as a binocular stimulus with regards to its visual direction and visual depth.

  8. The effect of monocular depth cues on the detection of moving objects by moving observers

    National Research Council Canada - National Science Library

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-01-01

    ... and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects...

  9. The role of monocularly visible regions in depth and surface perception.

    Science.gov (United States)

    Harris, Julie M; Wilcox, Laurie M

    2009-11-01

    The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.

  10. Compression of perceived depth as a function of viewing conditions.

    Science.gov (United States)

    Nolan, Ann; Delshad, Rebecca; Sedgwick, Harold A

    2012-12-01

    The magnification produced by a low-vision telescope has been shown to compress perceived depth. Looking through such a telescope, however, also entails monocular viewing and visual field restriction, and these viewing conditions, taken together, were also shown to compress perceived depth. The research presented here quantitatively explores the separate effects of each of these viewing conditions on perceived depth. Participants made verbal estimates of the length, relative to the width, of rectangles presented in a controlled table-top setting. In experiment 1, the rectangles were either in the frontal plane or receding in depth, and they were viewed either binocularly or monocularly with an unrestricted field of view (FOV). In experiment 2, the rectangles were in depth and were viewed monocularly with an unrestricted FOV, a moderately (40 degrees) restricted FOV, or a severely (11.5 degrees) restricted FOV. Viewed in the frontal plane, either monocularly or binocularly, the vertical dimension was expanded by about 10%. Viewed in depth, with an unrestricted FOV, the (projectively vertical) depth dimension was compressed by 12% when seen binocularly or 24% when seen monocularly. A monocular moderately (40 degrees) restricted FOV was very similar to the unrestricted monocular FOV. A severely (11.5 degrees) restricted FOV, however, produced a substantially greater 44% compression of perceived depth. Even under near-optimal binocular viewing conditions, there is some compression of perceived depth. The compression found when viewing through a low-vision telescope has been shown to be substantially greater. In addition to the previously demonstrated contribution of telescopic magnification to this effect, we have now shown that the viewing conditions of monocularity and severely restricted (11.5 degrees) FOV can each produce substantial increments in the compression of perceived depth. We found, however, that a moderately restricted (40 degrees) FOV does not increase

  11. Real-time drogue recognition and 3D locating for UAV autonomous aerial refueling based on monocular machine vision

    Institute of Scientific and Technical Information of China (English)

    Wang Xufeng; Kong Xingwei; Zhi Jianhui; Chen Yong; Dong Xinmin

    2015-01-01

    Drogue recognition and 3D locating is a key problem during the docking phase of autonomous aerial refueling (AAR). To solve this problem, a novel and effective method based on monocular vision is presented in this paper. Firstly, by employing computer vision with a red-ring-shape feature, a drogue detection and recognition algorithm is proposed to guarantee safety and ensure robustness to drogue diversity and changes in environmental conditions, without using a set of infrared light emitting diodes (LEDs) on the parachute part of the drogue. Secondly, considering camera lens distortion, a monocular vision measurement algorithm for drogue 3D locating is designed to ensure the accuracy and real-time performance of the system, with the drogue attitude provided. Finally, experiments are conducted to demonstrate the effectiveness of the proposed method. Experimental results show the performance of the entire system in contrast with other methods, which validates that the proposed method can recognize and locate the drogue three-dimensionally, rapidly and precisely.
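
    The red-ring-shape detection step can be illustrated, though not reproduced exactly, by a red threshold in HSV followed by a circular Hough transform; combined with the known ring diameter and calibrated intrinsics, the detected circle then yields the 3D position. A hypothetical sketch:

    ```python
    import cv2
    import numpy as np

    def detect_red_ring(frame_bgr):
        """Return (x, y, r) of the most prominent red circular ring, or None."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so combine two hue ranges
        mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
        mask = cv2.GaussianBlur(mask, (9, 9), 2)
        circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                                   param1=100, param2=30, minRadius=10, maxRadius=200)
        if circles is None:
            return None
        x, y, r = circles[0][0]          # strongest circle hypothesis
        return float(x), float(y), float(r)
    ```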

  12. Projection displays

    Science.gov (United States)

    Chiu, George L.; Yang, Kei H.

    1998-08-01

    Projection display in today's market is dominated by cathode ray tubes (CRTs). Further progress in this mature CRT projector technology will be slow and evolutionary. Liquid crystal based projection displays have gained rapid acceptance in the business market. New technologies are being developed on several fronts: (1) active matrix built from polysilicon or single crystal silicon; (2) electro-optic materials using ferroelectric liquid crystal, polymer dispersed liquid crystals or other liquid crystal modes, (3) micromechanical-based transducers such as digital micromirror devices, and grating light valves, (4) high resolution displays to SXGA and beyond, and (5) high brightness. This article reviews the projection displays from a transducer technology perspective along with a discussion of markets and trends.

  13. Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms.

    Science.gov (United States)

    Grove, Philip M; Gillam, Barbara; Ono, Hiroshi

    2002-07-01

    Perceived depth was measured for three-types of stereograms with the colour/texture of half-occluded (monocular) regions either similar to or dissimilar to that of binocular regions or background. In a two-panel random dot stereogram the monocular region was filled with texture either similar or different to the far panel or left blank. In unpaired background stereograms the monocular region either matched the background or was different in colour or texture and in phantom stereograms the monocular region matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture did not match either the background or the more distant surface. The content and context of monocular regions as well as their position are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental view accounts of these effects.

  14. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion on the hardware components, software components, and system integration. The elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system. A new method is proposed for the simultaneous calibration of camera internal parameters and hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is higher than 0.15 mm, which meets the requirement of robotic drilling for aircraft structures.
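
    The elliptical contour extraction step for reference hole detection can be illustrated with OpenCV (Canny edges plus ellipse fitting); this is a hypothetical sketch, not the algorithm presented in the paper:

    ```python
    import cv2

    def detect_reference_hole(gray):
        """Fit an ellipse to the most hole-like contour; returns (center, axes, angle)."""
        edges = cv2.Canny(gray, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        best, best_score = None, 0.0
        for c in contours:
            if len(c) < 20:                      # fitEllipse needs enough points
                continue
            ellipse = cv2.fitEllipse(c)
            (cx, cy), (ax1, ax2), angle = ellipse
            if ax1 == 0 or ax2 == 0:
                continue
            # Prefer large, nearly circular contours (a drilled hole seen near-normal)
            score = cv2.contourArea(c) * min(ax1, ax2) / max(ax1, ax2)
            if score > best_score:
                best, best_score = ellipse, score
        return best
    ```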

  15. Deep monocular 3D reconstruction for assisted navigation in bronchoscopy.

    Science.gov (United States)

    Visentini-Scarzanella, Marco; Sugiura, Takamasa; Kaneko, Toshimitsu; Koto, Shinichiro

    2017-07-01

    In bronchoscopy, computer vision systems for navigation assistance are an attractive low-cost solution to guide the endoscopist to target peripheral lesions for biopsy and histological analysis. We propose a decoupled deep learning architecture that projects input frames onto the domain of CT renderings, thus allowing offline training from patient-specific CT data. A fully convolutional network architecture is implemented on GPU and tested on a phantom dataset involving 32 video sequences and approximately 60k frames with aligned ground truth and renderings, which is made available as the first public dataset for bronchoscopy navigation. An average estimated depth accuracy of 1.5 mm was obtained, outperforming conventional direct depth estimation from input frames by 60%, and with a computational time of approximately 30 ms on modern GPUs. Qualitatively, the estimated depth and renderings closely resemble the ground truth. The proposed method shows a novel architecture to perform real-time monocular depth estimation without losing patient specificity in bronchoscopy. Future work will include integration within SLAM systems and collection of in vivo datasets.

  16. Global localization from monocular SLAM on a mobile phone.

    Science.gov (United States)

    Ventura, Jonathan; Arth, Clemens; Reitmayr, Gerhard; Schmalstieg, Dieter

    2014-04-01

    We propose the combination of a keyframe-based monocular SLAM system and a global localization method. The SLAM system runs locally on a camera-equipped mobile client and provides continuous, relative 6DoF pose estimation as well as keyframe images with computed camera locations. As the local map expands, a server process localizes the keyframes with a pre-made, globally-registered map and returns the global registration correction to the mobile client. The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.
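
    The "global registration correction" amounts to estimating the transform that maps locally estimated keyframe positions onto their globally localized counterparts; because monocular SLAM has an unknown scale, a similarity (Umeyama-style) alignment is a natural choice. A minimal numpy sketch under that assumption, not the authors' implementation:

    ```python
    import numpy as np

    def align_similarity(local_pts, global_pts):
        """Estimate s, R, t such that global ~= s * R @ local + t (Umeyama alignment).

        local_pts, global_pts : (N, 3) corresponding keyframe positions
        """
        mu_l = local_pts.mean(axis=0)
        mu_g = global_pts.mean(axis=0)
        L = local_pts - mu_l
        G = global_pts - mu_g
        cov = G.T @ L / len(local_pts)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # avoid reflections
            S[2, 2] = -1
        R = U @ S @ Vt
        var_l = (L ** 2).sum() / len(local_pts)
        s = np.trace(np.diag(D) @ S) / var_l           # scale fixes monocular ambiguity
        t = mu_g - s * R @ mu_l
        return s, R, t
    ```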

  17. Monocular visual scene understanding: understanding multi-object traffic scenes.

    Science.gov (United States)

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  18. Mobile Robot Hierarchical Simultaneous Localization and Mapping Using Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A hierarchical mobile robot simultaneous localization and mapping (SLAM) method that allows accurate maps to be obtained is presented. The local map level is composed of a set of local metric feature maps that are guaranteed to be statistically independent. The global level is a topological graph whose arcs are labeled with the relative location between local maps. An estimate of these relative locations is maintained with a local map alignment algorithm, and a more accurate estimate is calculated through a global minimization procedure using the loop closure constraint. The local map is built with a Rao-Blackwellised particle filter (RBPF), where the particle filter is used to extend the path posterior by sampling new poses. Landmark position estimation and update are implemented with an extended Kalman filter (EKF). A monocular camera mounted on the robot tracks 3D natural point landmarks, which are structured with matched scale invariant feature transform (SIFT) feature pairs. Matching of the high-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log N). Experimental results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
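
    The KD-tree-based SIFT matching step can be illustrated with OpenCV's FLANN matcher, which indexes descriptors in randomized KD-trees so that each query costs roughly O(log N); a short sketch, not the paper's code:

    ```python
    import cv2

    def match_sift_kdtree(img1_gray, img2_gray, ratio=0.7):
        """Match SIFT features between two frames with a KD-tree (FLANN) index."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1_gray, None)
        kp2, des2 = sift.detectAndCompute(img2_gray, None)
        # FLANN_INDEX_KDTREE = 1: approximate nearest neighbours via randomized KD-trees
        matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        matches = matcher.knnMatch(des1, des2, k=2)
        # Lowe's ratio test keeps only distinctive matches
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
    ```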

  19. Surgical outcome in monocular elevation deficit: A retrospective interventional study

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Rakhi

    2008-01-01

    Full Text Available Background and Aim: Monocular elevation deficiency (MED) is characterized by a unilateral defect in elevation, caused by paretic, restrictive or combined etiology. Treatment of this multifactorial entity is therefore varied. In this study, we performed different surgical procedures in patients with MED and evaluated their outcome, based on ocular alignment, improvement in elevation and binocular functions. Study Design: Retrospective interventional study. Materials and Methods: Twenty-eight patients were included in this study, from June 2003 to August 2006. Five patients underwent the Knapp procedure, with or without horizontal squint surgery; 17 patients had inferior rectus recession, with or without horizontal squint surgery; three patients had combined inferior rectus recession and Knapp procedure; and three patients had inferior rectus recession combined with contralateral superior rectus or inferior oblique surgery. The choice of procedure was based on the results of the forced duction test (FDT). Results: The forced duction test was positive in 23 cases (82%). Twenty-four of 28 patients (86%) were aligned to within 10 prism diopters. Elevation improved in 10 patients (36%), from no elevation above the primary position (-4) to only slight limitation of elevation (-1). Five patients had preoperative binocular vision and none gained it postoperatively. No significant postoperative complications or duction abnormalities were observed during the follow-up period. Conclusion: Management of MED depends upon selection of the correct surgical technique, based on the results of the FDT, for a satisfactory outcome.

  20. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time over a complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a manifold learning method, neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.

  1. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available We propose a framework combining machine learning with dynamic optimization for automatically reconstructing a 3D scene from a single still image of an unstructured outdoor environment, based on monocular vision with an uncalibrated camera. After a first image segmentation, a searching-tree strategy based on Bayes' rule is used to identify the occlusion hierarchy of all areas. After a second, superpixel segmentation, the AdaBoost algorithm is applied to integrate the detection of lighting, texture and material into depth estimates. Finally, all of these factors are optimized under constraints, yielding the complete depth map of the image. The source image is then integrated with its depth map, in point-cloud or bilinear-interpolation style, to realize the 3D reconstruction. Experiments comparing the method with typical methods on an associated database demonstrate that our method improves, to a certain extent, the reasonableness of the estimated overall 3D architecture of the image's scene. Moreover, it requires no manual assistance and no camera model information.
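
    The final integration of the source image with its depth map into a point cloud is standard pinhole back-projection. A sketch under assumed intrinsics (all names hypothetical):

    ```python
    import numpy as np

    def depth_to_point_cloud(depth, image_rgb, fx, fy, cx, cy):
        """Back-project a per-pixel depth map into a colored 3D point cloud.

        depth     : (H, W) depth in metres
        image_rgb : (H, W, 3) source image aligned with the depth map
        fx, fy, cx, cy : pinhole intrinsics of the (assumed) camera
        Returns an (N, 6) array of [X, Y, Z, R, G, B].
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx                  # pinhole back-projection
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        cols = image_rgb.reshape(-1, 3)
        valid = pts[:, 2] > 0                  # drop pixels without depth
        return np.hstack([pts[valid], cols[valid]])
    ```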

  2. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation.

    Science.gov (United States)

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-03-11

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
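
    The essence of the scale correction is to ratio the metric laser range against the (arbitrarily scaled) visual-odometry range to the same laser spot and rescale the trajectory accordingly. A minimal sketch of that step only, not the full navigation scheme of the paper:

    ```python
    import numpy as np

    def rescale_trajectory(vo_positions, vo_range_to_spot, laser_range):
        """Rescale an up-to-scale monocular VO trajectory using one laser measurement.

        vo_positions     : (N, 3) camera positions from monocular VO (arbitrary scale)
        vo_range_to_spot : VO-estimated distance to the laser spot (same arbitrary scale)
        laser_range      : metric distance to the spot from the laser distance meter (m)
        """
        s = laser_range / vo_range_to_spot     # metric scale factor
        return s * np.asarray(vo_positions)    # trajectory in metres

    # In practice the scale would be re-estimated whenever a new laser reading arrives,
    # which also limits the slow scale drift of monocular visual odometry.
    ```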

  3. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but unchanged visual acuity of the amblyopic eyes. Therefore our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The precision of binocular and monocular depth judgments in natural settings.

    Science.gov (United States)

    McKee, Suzanne P; Taylor, Douglas G

    2010-08-01

    We measured binocular and monocular depth thresholds for objects presented in a real environment. Observers judged the depth separating a pair of metal rods presented either in relative isolation, or surrounded by other objects, including a textured surface. In the isolated setting, binocular thresholds were greatly superior to the monocular thresholds by as much as a factor of 18. The presence of adjacent objects and textures improved the monocular thresholds somewhat, but the superiority of binocular viewing remained substantial (roughly a factor of 10). To determine whether motion parallax would improve monocular sensitivity for the textured setting, we asked observers to move their heads laterally, so that the viewing eye was displaced by 8-10 cm; this motion produced little improvement in the monocular thresholds. We also compared disparity thresholds measured with the real rods to thresholds measured with virtual images in a standard mirror stereoscope. Surprisingly, for the two naive observers, the stereoscope thresholds were far worse than the thresholds for the real rods, a finding that indicates that stereoscope measurements for unpracticed observers should be treated with caution. With practice, the stereoscope thresholds for one observer improved to almost the precision of the thresholds for the real rods.

  5. Effects of colored light, color of comparison stimulus, and illumination on error in perceived depth with binocular and monocular viewing.

    Science.gov (United States)

    Huang, Kuo-Chen

    2007-06-01

    Two experiments assessed the effects of colored light, color of a comparison stimulus, and illumination on error in perceived depth with binocular and monocular vision. Exp. 1 assessed effects of colored light, color of comparison stimulus, and source of depth cues on error in perceived depth. A total of 29 women and 19 men, Taiwanese college or graduate students ages 20 to 30 years (M=24.0, SD=2.5), participated; they were randomly divided into five groups, each being assigned to one of five possible colored light conditions. Analyses showed that the color of the comparison stimulus significantly affected the error in perceived depth, as this error was significantly greater for a red comparison stimulus than for blue and yellow comparison stimuli. Colored light significantly affected error in perceived depth, since error under white and yellow light was significantly less than that under green light. Moreover, error in perceived depth under white light was significantly less than that under blue light, but did not differ among the white, yellow, and red light conditions. Error in perceived depth for binocular viewing was significantly less than that for monocular viewing; there was no significant effect of sex. In Exp. 2, the effect of illumination on error in perceived depth was explored with 21 women and 15 men, Taiwanese college students with a mean age of 19.8 yr. (SD=1.1). Analysis indicated that illumination significantly affected error in perceived depth, as error under a 40-W condition was significantly greater than under 20- and 60-W conditions, although the latter two did not differ.

  6. Patterns of non-embolic transient monocular visual field loss.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical history or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36%, to the left in 47%, and occurred independently in either eye in 5% of cases. A past medical history of migraine was present in 12% and a family history in 8%. Headache followed neTMVL in 14% and was associated with autonomic features in 3%. The neTMVL was perceived as grey in 35%, white in 21%, black in 16% and as phosphenes in 9%. Most frequently, the pattern of loss was patchy (20%). Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests that some attacks are caused by vasospasm.

  7. The contribution of monocular depth cues to scene perception by pigeons.

    Science.gov (United States)

    Cavoto, Brian R; Cook, Robert G

    2006-07-01

    The contributions of different monocular depth cues to performance of a scene perception task were investigated in 4 pigeons. They discriminated the sequential depth ordering of three geometric objects in computer-rendered scenes. The orderings of these objects were specified by the combined presence or absence of the pictorial cues of relative density, occlusion, and relative size. In Phase 1, the pigeons learned the task as a direct function of the number of cues present. The three monocular cues contributed equally to the discrimination. Phase 2 established that differential shading on the objects provided an additional discriminative cue. These results suggest that the pigeon visual system is sensitive to many of the same monocular depth cues that are known to be used by humans. The theoretical implications for a comparative psychology of picture processing are considered.

  8. Refractive error and monocular viewing strengthen the hollow-face illusion.

    Science.gov (United States)

    Hill, Harold; Palmisano, Stephen; Matthews, Harold

    2012-01-01

    We measured the strength of the hollow-face illusion--the 'flipping distance' at which perception changes between convex and concave--as a function of a lens-induced 3 dioptre refractive error and monocular/binocular viewing. Refractive error and closing one eye both strengthened the illusion to approximately the same extent. The illusion was weakest viewed binocularly without refractive error and strongest viewed monocularly with it. This suggests binocular cues disambiguate the illusion at greater distances than monocular cues, but that both are disrupted by refractive error. We argue that refractive error leaves the ambiguous low-spatial-frequency shading information critical to the illusion largely unaffected while disrupting other, potentially disambiguating, depth/distance cues.

  9. A new combination of monocular and stereo cues for dense disparity estimation

    Science.gov (United States)

    Mao, Miao; Qin, Kaihuai

    2013-07-01

    Disparity estimation is a popular and important topic in computer vision and robotics. Stereo matching is commonly used for this task, but most existing methods fail in textureless regions and resort to numerical interpolation in those regions. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method combining monocular and stereo cues to compute dense disparities from a pair of images. The image is categorized into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions. For unreliable regions, we then use k-means to find the most similar reliable regions in terms of monocular cues. Our method is simple and effective. Experiments show that it can generate a more accurate disparity map than existing methods for images with large textureless regions, e.g. snow and icebergs.
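
    A rough sketch of the described combination, assuming block-matching stereo for reliable regions and k-means over simple colour features to transfer disparities into textureless regions; all thresholds and parameters are illustrative, not the authors' exact method:

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def fill_disparity(left_bgr, right_bgr, n_clusters=8):
        left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
        right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
        disp = cv2.StereoBM_create(numDisparities=64, blockSize=15).compute(left, right)
        disp = disp.astype(np.float32) / 16.0             # StereoBM stores fixed-point

        texture = cv2.GaussianBlur(np.abs(cv2.Laplacian(left, cv2.CV_32F)), (15, 15), 0)
        reliable = (disp > 0) & (texture > 4.0)            # textured and matched

        # Cluster reliable pixels by a simple monocular cue (colour in Lab space)
        feats = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
        km = KMeans(n_clusters=n_clusters, n_init=4).fit(feats[reliable.ravel()])
        cluster_disp = np.array([np.median(disp[reliable][km.labels_ == k])
                                 for k in range(n_clusters)])

        # Unreliable pixels borrow the disparity of the cluster they most resemble
        out = disp.copy()
        unreliable = ~reliable
        nearest = km.predict(feats[unreliable.ravel()])
        out[unreliable] = cluster_disp[nearest]
        return out
    ```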

  10. Eye movements in chameleons are not truly independent - evidence from simultaneous monocular tracking of two targets.

    Science.gov (United States)

    Katz, Hadas Ketter; Lustig, Avichai; Lev-Ari, Tidhar; Nov, Yuval; Rivlin, Ehud; Katzir, Gadi

    2015-07-01

    Chameleons perform large-amplitude eye movements that are frequently referred to as independent, or disconjugate. When prey (an insect) is detected, the chameleon's eyes converge to view it binocularly and 'lock' in their sockets so that subsequent visual tracking is by head movements. However, the extent of the eyes' independence is unclear. For example, can a chameleon visually track two small targets simultaneously and monocularly, i.e. one with each eye? This is of special interest because eye movements in ectotherms and birds are frequently independent, with optic nerves that are fully decussated and intertectal connections that are not as developed as in mammals. Here, we demonstrate that chameleons presented with two small targets moving in opposite directions can perform simultaneous, smooth, monocular, visual tracking. To our knowledge, this is the first demonstration of such a capacity. The fine patterns of the eye movements in monocular tracking were composed of alternating, longer, 'smooth' phases and abrupt 'step' events, similar to smooth pursuits and saccades. Monocular tracking differed significantly from binocular tracking with respect to both 'smooth' phases and 'step' events. We suggest that in chameleons, eye movements are not simply 'independent'. Rather, at the gross level, eye movements are (i) disconjugate during scanning, (ii) conjugate during binocular tracking and (iii) disconjugate, but coordinated, during monocular tracking. At the fine level, eye movements are disconjugate in all cases. These results support the view that in vertebrates, basic monocular control is under a higher level of regulation that dictates the eyes' level of coordination according to context. © 2015. Published by The Company of Biologists Ltd.

  11. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames's Hypothesis.

    Science.gov (United States)

    Vishwanath, Dhanraj

    2016-03-01

    Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames's claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  12. Elimination of aniseikonia in monocular aphakia with a contact lens-spectacle combination.

    Science.gov (United States)

    Schechter, R J

    1978-01-01

    Correction of monocular aphakia with contact lenses generally results in aniseikonia in the range of 7-9%; with correction by intraocular lenses, aniseikonia is approximately 2%. We present a new method of correcting aniseikonia in monocular aphakics using a contact lens-spectacle combination. A formula is derived wherein the contact lens is deliberately overcorrected; this overcorrection is then neutralized by an appropriate spectacle lens worn over the contact lens. Calculated results with this system over a wide range of possible situations consistently yield an aniseikonia of 0.1%.
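
    The combination acts as a weak reverse Galilean telescope: the over-plussed contact lens and the neutralizing minus spectacle lens minify the aphakic eye's image. A back-of-the-envelope sketch using the standard thin-lens power-factor approximation M ≈ 1/(1 − d·F_spec); this is an illustration of the principle, not the formula derived in the paper:

    ```python
    def afocal_minification(spectacle_power_D, vertex_distance_m=0.012):
        """Approximate angular magnification of a contact-lens/spectacle 'telescope'.

        The contact lens is over-plussed and a minus spectacle lens of power
        `spectacle_power_D` (dioptres), worn `vertex_distance_m` in front of it,
        neutralises the overcorrection.  Thin-lens power-factor approximation:
            M ~= 1 / (1 - d * F_spec)
        """
        return 1.0 / (1.0 - vertex_distance_m * spectacle_power_D)

    # A -7 D spectacle lens neutralising an over-plussed contact lens at a 12 mm
    # vertex distance minifies by roughly 8%, of the order needed to offset
    # the aniseikonia of aphakic contact lens correction.
    print(f"{(1 - afocal_minification(-7.0)) * 100:.1f}% minification")
    ```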

  13. END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS

    Directory of Open Access Journals (Sweden)

    C. Pinard

    2017-08-01

    Full Text Available We propose a depth map inference system for monocular videos, based on a novel navigation dataset that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the absence of rotation implies an easier structure-from-motion problem, which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that, although tied to camera intrinsic parameters, the problem is locally solvable and leads to good quality depth prediction.
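
    The paper's architecture is not reproduced here, but a minimal fully convolutional encoder-decoder of the kind referred to above, mapping stacked video frames to a per-pixel depth map, can be sketched in PyTorch (all layer choices are illustrative):

    ```python
    import torch
    import torch.nn as nn

    class TinyDepthFCN(nn.Module):
        """Minimal fully convolutional encoder-decoder for per-pixel depth.

        Input : (B, C, H, W) stacked video frames; output: (B, 1, H, W) depth.
        """
        def __init__(self, in_channels=6):       # e.g. two stacked RGB frames
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, x):
            return torch.relu(self.decoder(self.encoder(x))) + 1e-3  # positive depth

    # depth = TinyDepthFCN()(torch.randn(1, 6, 128, 128))  # -> (1, 1, 128, 128)
    ```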

  14. Perceived suprathreshold depth under conditions that elevate the stereothreshold.

    Science.gov (United States)

    Bedell, Harold E; Gantz, Liat; Jackson, Danielle N

    2012-12-01

    Previous studies considered the possibility that individuals with impaired stereoacuity can be identified by estimating the perceived depth of a target with a suprathreshold retinal image disparity. These studies showed that perceived suprathreshold depth is reduced when the image presented to one eye is blurred, but they did not address whether a similar reduction of perceived depth occurs when the stereothreshold is elevated using other manipulations. Stereothresholds were measured in six adult observers for a pair of bright 1-degree vertical lines during normal viewing and under five conditions that elevated the stereothreshold: monocular dioptric blur, monocular glare, binocular luminance reduction, monocular luminance reduction, and imposed disjunctive image motion. The observers subsequently matched the perceived depth of degraded targets presented with crossed or uncrossed disparities corresponding to two, four, and six times the elevated stereothreshold for each stimulus condition. The image manipulations used elevated the stereothreshold by a factor of 3.7 to 5.5 times. For targets with suprathreshold disparities, monocular blur, monocular luminance reduction, and disjunctive image motion resulted in a significant decrease in perceived depth. However, the magnitude of perceived suprathreshold depth was unaffected when monocular glare was introduced or the binocular luminance of the stereotargets was reduced. Not all conditions that increase the stereothreshold reduce the perceived depth of targets with suprathreshold disparities. Observers who have poor stereopsis therefore may or may not exhibit an associated reduction of perceived suprathreshold depth.

  15. Stereopsis has the edge in 3-D displays

    Science.gov (United States)

    Piantanida, T. P.

    The results of studies conducted at SRI International to explore differences in image requirements for depth and form perception with 3-D displays are presented. Monocular and binocular stabilization of retinal images was used to separate form and depth perception and to eliminate the retinal disparity input to stereopsis. Results suggest that depth perception is dependent upon illumination edges in the retinal image that may be invisible to form perception, and that the perception of motion-in-depth may be inhibited by form perception, and may be influenced by subjective factors such as ocular dominance and learning.

  16. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    Science.gov (United States)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to using 2D displays (which do not provide binocular-parallax cues to depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video, and then in stereoscopic 3D video. The percentage of possible correct answers for 2D and 3D were: 2D pretest equals 52%, 3D pretest equals 80%, 2D posttest equals 48%, 3D posttest equals 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operation, both on foot and in combat vehicles.

  17. 76 FR 17582 - Special Conditions: Bombardier Model BD-700-1A10 and BD-700-1A11 Airplanes, Head-Up Display (HUD...

    Science.gov (United States)

    2011-03-30

    ...-1A11 Airplanes, Head-Up Display (HUD) With Video Synthetic Vision System (SVS) AGENCY: Federal Aviation... Certificate Data Sheet (TCDS) T00003NY, those aircraft models are known under the marketing designation of...

  18. 76 FR 17062 - Special Conditions: Bombardier Model BD-700-1A10 and BD-700-1A11 Airplanes, Head-Up Display (HUD...

    Science.gov (United States)

    2011-03-28

    ...-1A11 Airplanes, Head-Up Display (HUD) With Video Synthetic Vision System (SVS) AGENCY: Federal Aviation... Certificate Data Sheet (TCDS) T00003NY, those aircraft models are known under the marketing designation of...

  19. 76 FR 31223 - Special Conditions: Bombardier Model BD-700-1A10 and BD-700-1A11 Airplanes, Head-up Display (HUD...

    Science.gov (United States)

    2011-05-31

    ...-1A11 Airplanes, Head-up Display (HUD) With Video Synthetic Vision System (SVS) AGENCY: Federal Aviation.... Per Type Certificate Data Sheet (TCDS) T00003NY, those aircraft models are known under the marketing...

  20. Monocular inhibition reveals temporal and spatial changes in gene expression in the primary visual cortex of marmoset.

    Directory of Open Access Journals (Sweden)

    Yuki eNakagami

    2013-04-01

    Full Text Available We investigated the time course of the expression of several activity-dependent genes evoked by visual inputs in the primary visual cortex (V1 in adult marmosets. In order to examine the rapid time course of activity-dependent gene expression, marmosets were first monocularly inactivated by tetrodotoxin (TTX, kept in darkness for two days, and then exposed to various length of light stimulation. Activity-dependent genes including HTR1B, HTR2A, whose activity-dependency were previously reported by us, and well-known immediate early genes (IEGs, c-FOS, ZIF268, and ARC, were examined by in situ hybridization. Using this system, first, we demonstrated the ocular dominance type of gene expression pattern in V1 under this condition. IEGs were expressed in columnar patterns throughout layers II-VI of all the tested monocular marmosets. Second, we showed the regulation of HTR1B and HTR2A expressions by retinal spontaneous activity, because HTR1B and HTR2A mRNA expressions sustained a certain level regardless of visual stimulation and were inhibited by a blockade of the retinal activity with TTX. Third, IEGs dynamically changed its laminar distribution from half an hour to several hours upon a stimulus onset with the unique time course for each gene. The expression patterns of these genes were different in neurons of each layer as well. These results suggest that the regulation of each neuron in the primary visual cortex of marmosets is subjected to different regulation upon the change of activities from retina. It should be related to a highly differentiated laminar structure of primate visual systems, reflecting the functions of the activity-dependent gene expression in marmoset V1.

  1. Evaluation of anti-glare applications for a tactical helmet-mounted display

    Science.gov (United States)

    Roll, Jason L.; Trew, Noel J. M.; Geis, Matthew R.; Havig, Paul R.

    2011-06-01

    Non see-through, monocular helmet mounted displays (HMDs) provide warfighters with unprecedented amounts of information at a glance. The US Air Force recognizes their usefulness, and has included such an HMD as part of a kit for ground-based, Battlefield Airmen. Despite their many advantages, non see-through HMDs occlude a large portion of the visual field when worn as designed, directly in front of the eye. To address this limitation, operators have chosen to wear it just above the cheek, angled up toward the eye. However, wearing the HMD in this position exposes the display to glare, causing a potential viewing problem. In order to address this problem, we tested several film and HMD hood applications for their effect on glare. The first experiment objectively examined the amount of light reflected off the display with each application in a controlled environment. The second experiment used human participants to subjectively evaluate display readability/legibility with each film and HMD hood covering under normal office lighting and under a simulated sunlight condition. In this test paradigm, participants had to correctly identify different icons on a map and different words on a white background. Our results indicate that though some applications do reduce glare, they do not significantly improve the HMD's readability/legibility compared with an uncovered screen. This suggests that these post-production modifications will not completely solve this problem and underscores the importance of employing a user-centered approach early in the design cycle to determine an operator's use-case before manufacturing an HMD for a particular user community.

  2. Perception of Acceleration in Motion-In-Depth With Only Monocular and Binocular Information

    Directory of Open Access Journals (Sweden)

    Santiago Estaún

    2003-01-01

    Full Text Available Percepción de la aceleración en el movimiento en profundidad con información monocular y con información monocular y binocular. En muchas ocasiones es necesario adecuar nuestras acciones a objetos que cambian su aceleración. Sin embargo, no se ha encontrado evidencia de una percepción directa de la aceleración. En su lugar, parece ser que somos capaces de detectar cambios de velocidad en el movimiento 2-D dentro de una ventana temporal. Además, resultados recientes sugieren que el movimiento en profundidad se detecta a través de cambios de posición. Por lo tanto, para detectar aceleración en profundidad sería necesario que el sistema visual lleve a cabo algun tipo de cómputo de segundo orden. En dos experimentos, mostramos que los observadores no perciben la aceleración en trayectorias de aproximación, al menos en los rangos que utilizados [600- 800 ms] dando como resultado una sobreestimación del tiempo de llegada. Independientemente de la condición de visibilidad (sólo monocular o monocular más binocular, la respuesta se ajusta a una estrategia de velocidad constante. No obstante, la sobreestimación se reduce cuando la información binocular está disponible.

  3. LASIK monocular en pacientes adultos con ambliopía por anisometropía

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusiones: La cirugía refractiva monocular en pacientes con ambliopía por anisometropía es una opción terapéutica segura y efectiva que ofrece resultados visuales satisfactorios, preservando o incluso mejorando la AVMC preoperatoria.

  4. Depth scaling in phantom and monocular gap stereograms using absolute distance information.

    Science.gov (United States)

    Kuroki, Daiichiro; Nakamizo, Sachio

    2006-11-01

    The present study aimed to investigate whether the visual system scales apparent depth from binocularly unmatched features by using absolute distance information. In Experiment 1 we examined the effect of convergence on perceived depth in phantom stereograms [Gillam, B., & Nakayama, K. (1999). Quantitative depth for a phantom surface can be based on cyclopean occlusion cues alone. Vision Research, 39, 109-112.], monocular gap stereograms [Pianta, M. J., & Gillam, B. J. (2003a). Monocular gap stereopsis: manipulation of the outer edge disparity and the shape of the gap. Vision Research, 43, 1937-1950.] and random dot stereograms. In Experiments 2 and 3 we examined the effective range of viewing distances for scaling the apparent depths in these stereograms. The results showed that: (a) the magnitudes of perceived depths increased in all stereograms as the estimate of the viewing distance increased while keeping proximal and/or distal sizes of the stimuli constant, and (b) the effective range of viewing distances was significantly shorter in monocular gap stereograms. The first result indicates that the visual system scales apparent depth from unmatched features as well as that from horizontal disparity, while the second suggests that, at far distances, the strength of the depth signal from an unmatched feature in monocular gap stereograms is relatively weaker than that from horizontal disparity.

  5. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Universal Numeric Segmented Display

    CERN Document Server

    Azad, Md Abul kalam; Kamruzzaman, S M

    2010-01-01

    Segmentation display plays a vital role to display numerals. But in today's world matrix display is also used in displaying numerals. Because numerals has lots of curve edges which is better supported by matrix display. But as matrix display is costly and complex to implement and also needs more memory, segment display is generally used to display numerals. But as there is yet no proposed compact display architecture to display multiple language numerals at a time, this paper proposes uniform display architecture to display multiple language digits and general mathematical expressions with higher accuracy and simplicity by using a 18-segment display, which is an improvement over the 16 segment display.

  7. Unique interactive projection display screen

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.T.

    1997-11-01

    Projection systems continue to be the best method to produce large (1 meter and larger) displays. However, in order to produce a large display, considerable volume is typically required. The Polyplanar Optic Display (POD) is a novel type of projection display screen, which for the first time, makes it possible to produce a large projection system that is self-contained and only inches thick. In addition, this display screen is matte black in appearance allowing it to be used in high ambient light conditions. This screen is also interactive and can be remotely controlled via an infrared optical pointer resulting in mouse-like control of the display. Furthermore, this display need not be flat since it can be made curved to wrap around a viewer as well as being flexible.

  8. Cataract surgery: emotional reactions of patients with monocular versus binocular vision Cirurgia de catarata: aspectos emocionais de pacientes com visão monocular versus binocular

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2. METHODS: A transversal comparative study was performed using a structured questionnaire from a previous exploratory study before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years and 110 in Group 2 (68.2 ± 10.2 years. Most patients in group 1 (40.6% and 22.7% of group 2, reported fear of surgery (pOBJETIVO: Verificar reações emocionais relacionadas à cirurgia de catarata entre pacientes com visão monocular (Grupo 1 e binocular (Grupo 2. MÉTODOS: Foi realizado um estudo tranversal, comparativo por meio de um questionário estruturado respondido por pacientes antes da cirurgia de catarata. RESULTADOS: A amostra foi composta de 96 pacientes no Grupo 1 (69.3 ± 10.4 anos e 110 no Grupo 2 (68.2 ± 10.2 anos. Consideravam apresentar medo da cirugia 40.6% do Grupo 1 e 22.7% do Grupo 2 (p<0.001 e entre as principais causas do medo, a possibilidade de perda da visão, complicações cirúrgicas e a morte durante o procedimento foram apontadas. Os sentimentos mais comuns entre os dois grupos foram dúvidas a cerca dos resultados da cirurgia e o nervosismo diante do procedimento. CONCLUSÃO: Pacientes com visão monocular apresentaram mais medo e dúvidas relacionadas à cirurgia de catarata comparados com aqueles com visão binocular. Portanto, é necessário que os médicos considerem estas reações emocionais e invistam mais tempo para esclarecer os riscos e benefícios da cirurgia de catarata.

  9. Cirurgia monocular para esotropias de grande ângulo: histórico e novos paradigmas Monocular surgery for large-angle esotropias: review and new paradigms

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2010-08-01

    Full Text Available As primitivas cirurgias de estrabismo, as miotomias e as tenotomias, eram feitas, simplesmente, seccionando-se o músculo ou o seu tendão, sem nenhuma sutura. Estas cirurgias eram feitas, geralmente, em um só olho, tanto em pequenos como em grandes desvios e os resultados eram pouco previsíveis. Jameson, em 1922, propôs uma nova técnica cirúrgica, usando suturas e fixando, na esclera, o músculo seccionado, tornando a cirurgia mais previsível. Para as esotropias, praticou recuos de, no máximo, 5 mm para o reto medial, o que se tornou uma regra para os demais cirurgiões que o sucederam, sendo impossível, a partir daí, a correção de esotropias de grande ângulo com cirurgia monocular. Rodriguez-Vásquez, em 1974, superou o parâmetro de 5 mm, propondo amplos recuos dos retos mediais (6 a 9 mm para o tratamento da síndrome de Ciancia, com bons resultados. Os autores revisaram a literatura, ano a ano, objetivando comparar os vários trabalhos e, com isso, concluíram que a cirurgia monocular de recuo-ressecção pode constituir uma opção viável para o tratamento cirúrgico das esotropias de grande ângulo.The primitive strabismus surgeries, myotomies and tenotomies, were performed simply by sectioning the muscle or its tendon without any suture. Such surgeries were usually performed in just one eye both in small and in large angles with not really predictable results. In 1922, Jameson introduced a new surgery technique using sutures and fixing the sectioned muscle to the sclera, increasing surgery predictability. For the esotropias he carried out no more than 5 mm recession of the medial rectus, which became a rule for the surgeons who followed him, which made it impossible from then on to correct largeangle esotropias with a monocular surgery. Rodriguez-Vásquez, in 1974, exceeded the 5 mm parameter by proposing large recessions of the medial recti (6 to 9 mm to treat the Ciancia syndrome with good results. The authors revised the

  10. Evaluación de la reproducibilidad de la retinoscopia dinámica monocular de Merchán

    Directory of Open Access Journals (Sweden)

    Lizbeth Acuña

    2010-08-01

    Full Text Available Objetivo: Evaluar la reproducibilidad de la retinoscopia dinámica monocular y su nivel de acuerdo con la retinoscopia estática binocular y monocular, retinoscopia de Nott y Método Estimado Monocular (MEM. Métodos: Se determinó la reproducibilidad entre los evaluadores y entre los métodos por medio del coeficiente de correlación intraclase (CCI y se establecieron los límites de acuerdo de Bland y Altman. Resultados: Se evaluaron 126 personas entre 5 y 39 años y se encontró una baja reproducibilidad interexaminador de la retinoscopia dinámica monocular en ambos ojos CCI ojo derecho: 0.49 (IC95% 0.36; 0.51; ojo izquierdo 0.51 (IC95% 0.38; 0.59. El límite de acuerdo entre evaluadores fue ±1.25 D. Al evaluar la reproducibilidad entre la retinoscopia dinámica monocular y la estática se observó que la mayor reproducibilidad se obtuvo con la estática binocular y monocular y, en visión próxima, entre el método estimado monocular y la retinoscopia de Nott. Conclusiones: La retinoscopia dinámica monocular no es una prueba reproducible y presenta diferencias clínicas significativas para determinar el estado refractivo, en cuanto a poder dióptrico y tipo de ametropía, por tanto, no se puede considerar dentro de la batería de exámenes aplicados para determinar diagnósticos y correcciones refractivas tanto en la visión lejana como en la visión próxima.

  11. Evaluación de la reproducibilidad de la retinoscopia dinámica monocular de Merchán

    Directory of Open Access Journals (Sweden)

    Lizbeth Acuña

    2009-12-01

    Full Text Available Objetivo: Evaluar la reproducibilidad de la retinoscopia dinámica monocular y su nivel de acuerdo con la retinoscopia estática binocular y monocular, retinoscopia de Nott y Método Estimado Monocular (MEM.Métodos: Se determinó la reproducibilidad entre los evaluadores y entre los métodos por medio del coeficiente de correlación intraclase (CCI y se establecieron los límites de acuerdo de Bland y Altman.Resultados: Se evaluaron 126 personas entre 5 y 39 años y se encontró una baja reproducibilidad interexaminador de la retinoscopia dinámica monocular en ambos ojos CCI ojo derecho: 0.49 (IC95% 0.36; 0.51; ojo izquierdo 0.51 (IC95% 0.38; 0.59. El límite de acuerdo entre evaluadores fue ±1.25 D. Al evaluar la reproducibilidad entre la retinoscopia dinámica monocular y la estática se observó que la mayor reproducibilidad se obtuvo con la estática binocular y monocular y, en visión próxima, entre el método estimado monocular y la retinoscopia de Nott.Conclusiones: La retinoscopia dinámica monocular no es una prueba reproducible y presenta diferencias clínicas significativas para determinar el estado refractivo, en cuanto a poder dióptrico y tipo de ametropía, por tanto, no se puede considerar dentro de la batería de exámenes aplicados para determinar diagnósticos y correcciones refractivas tanto en la visión lejana como en la visión próxima.

  12. The framing effect with rectangular and trapezoidal surfaces: actual and pictorial surface slant, frame orientation, and viewing condition.

    Science.gov (United States)

    Reinhardt-Rutland, A H

    1999-01-01

    The perceived slant of a surface relative to the frontal plane can be reduced when the surface is viewed through a frame between the observer and the surface. Aspects of this framing effect were investigated in three experiments in which observers judged the orientations-in-depth of rectangular and trapezoidal surfaces which were matched for pictorial depth. In experiments 1 and 2, viewing was stationary-monocular. In experiment 1, a frontal rectangular frame was present or absent during viewing. The perceived slants of the surfaces were reduced in the presence of the frame; the reduction for the trapezoidal surface was greater, suggesting that conflict in stimulus information contributes to the phenomenon. In experiment 2, the rectangular frame was either frontal or slanted; in a third condition, a frame was trapezoidal and frontal. The conditions all elicited similar results, suggesting that the framing effect is not explained by pictorial perception of the display, or by assimilation of the surface orientation to the frame orientation. In experiment 3, viewing was moving-monocular to introduce motion parallax; the framing effect was reduced, being appreciable only for a trapezoidal surface. The results are related to other phenomena in which depth perception of points in space tends towards a frontal plane; this frontal-plane tendency is attributed to heavy experimental demands, mainly concerning impoverished, conflicting, and distracting information.

  13. Flexible Bistable Cholesteric Reflective Displays

    Science.gov (United States)

    Yang, Deng-Ke

    2006-03-01

    Cholesteric liquid crystals (ChLCs) exhibit two stable states at zero field condition-the reflecting planar state and the nonreflecting focal conic state. ChLCs are an excellent candidate for inexpensive and rugged electronic books and papers. This paper will review the display cell structure,materials and drive schemes for flexible bistable cholesteric (Ch) reflective displays.

  14. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames’s Hypothesis

    Directory of Open Access Journals (Sweden)

    Dhanraj Vishwanath

    2016-04-01

    Full Text Available Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925 involved altering accommodative (focus distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames’s claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  15. Embolic and nonembolic transient monocular visual field loss: a clinicopathologic review.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Hu, Han-Hwa; Plant, Gordon T

    2013-01-01

    Transient monocular blindness and amaurosis fugax are umbrella terms describing a range of patterns of transient monocular visual field loss (TMVL). The incidence rises from ≈1.5/100,000 in the third decade of life to ≈32/100,000 in the seventh decade of life. We review the vascular supply of the retina that provides an anatomical basis for the types of TMVL and discuss the importance of collaterals between the external and internal carotid artery territories and related blood flow phenomena. Next, we address the semiology of TMVL, focusing on onset, pattern, trigger factors, duration, recovery, frequency-associated features such as headaches, and on tests that help with the important differential between embolic and non-embolic etiologies.

  16. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

    In this paper, a monocular vision measurement system based on cooperative targets detection is proposed, which can capture the three-dimensional information of objects by recognizing the checkerboard target and calculating of the feature points. The aircraft pose measurement is an important problem for aircraft’s monitoring and control. Monocular vision system has a good performance in the range of meter. This paper proposes an algorithm based on coplanar rectangular feature to determine the unique solution of distance and angle. A continuous frame detection method is presented to solve the problem of corners’ transition caused by symmetry of the targets. Besides, a displacement table test system based on three-dimensional precision and measurement system human-computer interaction software has been built. Experiment result shows that it has a precision of 2mm in the range of 300mm to 1000mm, which can meet the requirement of the position measurement in the aircraft cabin.

  17. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This article proposes a monocular trajectory intersection method,a videometrics measurement with a mature theoretical system to solve the 3D motion parameters of a point target.It determines the target’s motion parameters including its 3D trajectory and velocity by intersecting the parametric trajectory of a motion target and series of sight-rays by which a motion camera observes the target,in contrast with the regular intersection method for 3D measurement by which the sight-rays intersect at one point.The method offers an approach to overcome the technical failure of traditional monocular measurements for the 3D motion of a point target and thus extends the application fields of photogrammetry and computer vision.Wide application is expected in passive observations of motion targets on various mobile beds.

  18. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available The study aims to investigate the ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed in this study. In the new system, the moving ships are firstly captured by the video sequences. Then the detection and tracking of the moving objects have been done to identify the regions in the scene that correspond to the video sequences. Secondly, the quantity description of the dynamic states of the moving objects in the geographical coordinate system, including the location, velocity, orientation, etc, has been calculated based on the monocular vision geometry. Finally, the collision risk is evaluated and consequently the ship manipulation commands are suggested, aiming to avoid the potential collision. Both computer simulation and field experiments have been implemented to validate the proposed system. The analysis results have shown the effectiveness of the proposed system.

  19. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of single camera with controllable focal length, can be used in 3D reconstruction. Applying the system for 3D reconstruction, must consider effects caused by digital camera. There are two possible methods to make the monocular stereo vision system. First one the distance between the target object and the camera image plane is constant and lens moves. The second method assumes that the lens position is constant and the image plane moves in respect to the target. In this paper mathematical modeling of two approaches is presented. We focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are implemented and simulated on Matlab. The analysis is used to define application constrains and limitations of these methods. The results can be also used to enhance the accuracy of depth measurement.

  20. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    YU QiFeng; SHANG Yang; ZHOU Jian; ZHANG XiaoHu; LI LiChun

    2009-01-01

    This article proposes a monocular trajectory intersection method,a videometrics measurement with a mature theoretical system to solve the 3D motion parameters of a point target.It determines the target's motion parameters including its 3D trajectory and velocity by intersecting the parametric trajectory of a motion target and series of sight-rays by which a motion camera observes the target,in contrast with the regular intersection method for 3D measurement by which the sight-rays intersect at one point.The method offers an approach to overcome the technical failure of traditional monocular measurements for the 3D motion of a point target and thus extends the application fields of photogrammetry and computer vision.Wide application is expected in passive observations of motion targets on various mobile beds.

  1. Large-scale monocular FastSLAM2.0 acceleration on an embedded heterogeneous architecture

    Science.gov (United States)

    Abouzahir, Mohamed; Elouardi, Abdelhafid; Bouaziz, Samir; Latif, Rachid; Tajer, Abdelouahed

    2016-12-01

    Simultaneous localization and mapping (SLAM) is widely used in many robotic applications and autonomous navigation. This paper presents a study of FastSLAM2.0 computational complexity based on a monocular vision system. The algorithm is intended to operate with many particles in a large-scale environment. FastSLAM2.0 was partitioned into functional blocks allowing a hardware software matching on a CPU-GPGPU-based SoC architecture. Performances in terms of processing time and localization accuracy were evaluated using a real indoor dataset. Results demonstrate that an optimized and efficient CPU-GPGPU partitioning allows performing accurate localization results and high-speed execution of a monocular FastSLAM2.0-based embedded system operating under real-time constraints.

  2. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man was attended to the Clinic Ophthalmic Center, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION. Physicians might look for arteriosclerotic risk factors and family history of NAION among predisposing risk factors before prescribing sildenafil erectile dysfunction drugs.

  3. Benign pituitary adenoma associated with hyperostosis of the spenoid bone and monocular blindness. Case report.

    Science.gov (United States)

    Milas, R W; Sugar, O; Dobben, G

    1977-01-01

    The authors describe a case of benign chromophobe adenoma associated with hyperostosis of the lesser wing of the sphenoid bone and monocular blindness in a 38-year-old woman. The endocrinological and radiological evaluations were all suggestive of a meningioma. The diagnosis was established by biopsy of the tumor mass. After orbital decompression and removal of the tumor, the patient was treated with radiation therapy. Her postoperative course was uneventful, and her visual defects remained fixed.

  4. Augmented reality three-dimensional display with light field fusion.

    Science.gov (United States)

    Xie, Songlin; Wang, Peng; Sang, Xinzhu; Li, Chengyu

    2016-05-30

    A video see-through augmented reality three-dimensional display method is presented. The system that is used for dense viewpoint augmented reality presentation fuses the light fields of the real scene and the virtual model naturally. Inherently benefiting from the rich information of the light field, depth sense and occlusion can be handled under no priori depth information of the real scene. A series of processes are proposed to optimize the augmented reality performance. Experimental results show that the reconstructed fused 3D light field on the autostereoscopic display is well presented. The virtual model is naturally integrated into the real scene with a consistence between binocular parallax and monocular depth cues.

  5. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy by using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed to assume operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, aircraft systems, systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking that of a human-pilot like judgement. The research is intended to bridge the gap between practical GPS coverage and precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials, for experimental validation. Albeit the emphasis on miniature robotic aircraft this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  6. Measuring method for the object pose based on monocular vision technology

    Science.gov (United States)

    Sun, Changku; Zhang, Zimiao; Wang, Peng

    2010-11-01

    Position and orientation estimation of the object, which can be widely applied in the fields as robot navigation, surgery, electro-optic aiming system, etc, has an important value. The monocular vision positioning algorithm which is based on the point characteristics is studied and new measurement method is proposed in this paper. First, calculate the approximate coordinates of the five reference points which can be used as the initial value of iteration in the camera coordinate system according to weakp3p; Second, get the exact coordinates of the reference points in the camera coordinate system through iterative calculation with the constraints relationship of the reference points; Finally, get the position and orientation of the object. So the measurement model of monocular vision is constructed. In order to verify the accuracy of measurement model, a plane target using infrared LED as reference points is designed to finish the verification of the measurement method and the corresponding image processing algorithm is studied. And then The monocular vision experimental system is established. Experimental results show that the translational positioning accuracy reaches +/-0.05mm and rotary positioning accuracy reaches +/-0.2o .

  7. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal to understand neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing of the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if the eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept highly relies on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  8. Depth reversals in stereoscopic displays driven by apparent size

    Science.gov (United States)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  9. Colorimetric evaluation of display performance

    Science.gov (United States)

    Kosmowski, Bogdan B.

    2001-08-01

    The development of information techniques, using new technologies, physical phenomena and coding schemes, enables new application areas to be benefited form the introduction of displays. The full utilization of the visual perception of a human operator, requires the color coding process to be implemented. The evolution of displays, from achromatic (B&W) and monochromatic, to multicolor and full-color, enhances the possibilities of information coding, creating however a need for the quantitative methods of display parameter assessment. Quantitative assessment of color displays, restricted to photometric measurements of their parameters, is an estimate leading to considerable errors. Therefore, the measurements of a display's color properties have to be based on spectral measurements of the display and its elements. The quantitative assessment of the display system parameters should be made using colorimetric systems like CIE1931, CIE1976 LAB or LUV. In the paper, the constraints on the measurement method selection for the color display evaluation are discussed and the relations between their qualitative assessment and the ergonomic conditions of their application are also presented. The paper presents the examples of using LUV colorimetric system and color difference (Delta) E in the optimization of color liquid crystal displays.

  10. A 3D Human Skeletonization Algorithm for a Single Monocular Camera Based on Spatial–Temporal Discrete Shadow Integration

    Directory of Open Access Journals (Sweden)

    Jie Hou

    2017-07-01

    Full Text Available Three-dimensional (3D human skeleton extraction is a powerful tool for activity acquirement and analyses, spawning a variety of applications on somatosensory control, virtual reality and many prospering fields. However, the 3D human skeletonization relies heavily on RGB-Depth (RGB-D cameras, expensive wearable sensors and specific lightening conditions, resulting in great limitation of its outdoor applications. This paper presents a novel 3D human skeleton extraction method designed for the monocular camera large scale outdoor scenarios. The proposed algorithm aggregates spatial–temporal discrete joint positions extracted from human shadow on the ground. Firstly, the projected silhouette information is recovered from human shadow on the ground for each frame, followed by the extraction of two-dimensional (2D joint projected positions. Then extracted 2D joint positions are categorized into different sets according to activity silhouette categories. Finally, spatial–temporal integration of same-category 2D joint positions is carried out to generate 3D human skeletons. The proposed method proves accurate and efficient in outdoor human skeletonization application based on several comparisons with the traditional RGB-D method. Finally, the application of the proposed method to RGB-D skeletonization enhancement is discussed.

  11. The performances of a super-multiview simulator and the presence of monocular depth sense

    Science.gov (United States)

    Lee, Beom-Ryeol; Park, Jung-Chul; Jeong, Ilkon; Son, Jung-young

    2015-05-01

    A simulator which can test a supermultiview condition is introduced. It allows to view two adjacent view images for each eye simultaneously and display patched images appearing at the viewing zone of a contact-type multiview 3-D display. The accommodation and vergence test with an accommodometer reveals that viewers can verge and accommodate even to the image at 600 mm and 2.7 m from them when a display screen/panel is located at 1.58 m from them. The verging and accommodating distance range is much more than the range 1.3 m ~ 1.9 m determined by the depth of field of the viewers. Furthermore, the patched images also provide a good depth sense which can be better than that from individual view images.

  12. Cirurgia monocular para esotropias de grande ângulo: um novo paradigma Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available OBJETIVO: Demonstrar a viabilidade da cirurgia monocular no tratamento das esotropias de grande ângulo, praticando-se amplos recuos do reto medial (6 a 10 mm e grandes ressecções do reto lateral (8 a 10 mm. MÉTODOS: Foram operados, com anestesia geral e sem reajustes per ou pósoperatórios, 46 pacientes com esotropias de 50δ ou mais, relativamente comitantes. Os métodos utilizados para refratometria, medida da acuidade visual e do ângulo de desvio, foram os, tradicionalmente, utilizados em estrabologia. No pós-operatório, além das medidas na posição primária do olhar, foi feita uma avaliação da motilidade do olho operado, em adução e em abdução. RESULTADOS: Foram considerados quatro grupos de estudo, correspondendo a quatro períodos de tempo: uma semana, seis meses, dois anos e quatro a sete anos. Os resultados para o ângulo de desvio pós-cirúrgico foram compatíveis com os da literatura em geral e mantiveram-se estáveis ao longo do tempo. A motilidade do olho operado apresentou pequena limitação em adução e nenhuma em abdução, contrariando o encontrado na literatura estrabológica. Comparando os resultados de adultos com os de crianças e de amblíopes com não amblíopes, não foram encontradas diferenças estatisticamente significativas entre eles. CONCLUSÃO:Em face dos resultados encontrados, entende-se ser possível afirmar que a cirurgia monocular de recuo-ressecção pode ser considerada opção viável para o tratamento das esotropias de grande ângulo, tanto para adultos quanto para crianças, bem como para amblíopes e não amblíopes.PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias through large recessions of the medial rectus (6 to 10 mm and large resections of the lateral rectus (8 to 10 mm. METHODS: 46 patients were submitted to surgery. They had esotropias of 50Δor more that were relatively comitant. The patients were operated under general

  13. Displaying gray shades in liquid crystal displays

    Indian Academy of Sciences (India)

    T N Ruckmongathan

    2003-08-01

    Quality of image in a display depends on the contrast, colour, resolution and the number of gray shades. A large number of gray shades is necessary to display images without any contour lines. These contours are due to limited number of gray shades in the display causing abrupt changes in grayness of the image, while the original image has a gradual change in brightness. Amplitude modulation has the capability to display a large number of gray shades with minimum number of time intervals [1,2]. This paper will cover the underlying principle of amplitude modulation, some variants and its extension to multi-line addressing. Other techniques for displaying gray shades in passive matrix displays are reviewed for the sake of comparison.

  14. More clinical observations on migraine associated with monocular visual symptoms in an Indian population

    Directory of Open Access Journals (Sweden)

    Vishal Jogi

    2016-01-01

    Full Text Available Context: Retinal migraine (RM is considered as one of the rare causes of transient monocular visual loss (TMVL and has not been studied in Indian population. Objectives: The study aims to analyze the clinical and investigational profile of patients with RM. Materials and Methods: This is an observational prospective analysis of 12 cases of TMVL fulfilling the International Classification of Headache Disorders-2nd edition (ICHD-II criteria of RM examined in Neurology and Ophthalmology Outpatient Department (OPD of Postgraduate Institute of Medical Education and Research (PGIMER, Chandigarh from July 2011 to October 2012. Results: Most patients presented in 3 rd and 4 th decade with equal sex distribution. Seventy-five percent had antecedent migraine without aura (MoA and 25% had migraine with Aura (MA. Headache was ipsilateral to visual symptoms in 67% and bilateral in 33%. TMVL preceded headache onset in 58% and occurred during headache episode in 42%. Visual symptoms were predominantly negative except in one patient who had positive followed by negative symptoms. Duration of visual symptoms was variable ranging from 30 s to 45 min. None of the patient had permanent monocular vision loss. Three patients had episodes of TMVL without headache in addition to the symptom constellation defining RM. Most of the tests done to rule out alternative causes were normal. Magnetic resonance imaging (MRI brain showed nonspecific white matter changes in one patient. Visual-evoked potential (VEP showed prolonged P100 latencies in two cases. Patent foramen ovale was detected in one patient. Conclusions: RM is a definite subtype of migraine and should remain in the ICHD classification. It should be kept as one of the differential diagnosis of transient monocular vision loss. We propose existence of "acephalgic RM" which may respond to migraine prophylaxis.

  15. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    Science.gov (United States)

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.

  16. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...... solver allows us to perform the filtering in end-effector space. This effectively reduces the dimensionality of the state space while still allowing for the estimation of a large set of motions. Preliminary experiments with the strategy show good results compared to a full-pose tracker....

  17. Effect of ophthalmic filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination.

    Science.gov (United States)

    Richer, S P; Little, A C; Adams, A J

    1984-11-01

    The majority of ophthalmic filters, whether they be in the form of spectacles or contact lenses, are absorbance type filters. Although color vision researchers routinely provide spectrophotometric transmission profiles of filters, filter thickness is rarely specified. In this paper, colorimetric tools and volume color theory are used to show that the color of a filter as well as its physical properties are altered dramatically by changes in thickness. The effect of changes in X-Chrom filter thickness on predicted monocular dichromatic luminance and chromaticity discrimination is presented.

  18. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    Science.gov (United States)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by LAC require modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles is sufficient information for local AGV navigation.

  19. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization based stereoscopic 3D displays using multispectral Fourier optics viewing angle measurements. Full polarization analysis of the light emitted by the display in the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase shift filter into the display structure. In addition, strong spectral dependence of the ellipticity and polarization degree is observed. These features come from the strong spectral dependence of the phase shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two glasses filters, the resulting luminance across each filter is computed for left and right eye views. Monocular contrast for each eye and binocular contrasts are performed in the observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as auto-stereoscopic 3D displays allowing direct comparison of the performances.

  20. Invisible Display in Aluminum

    DEFF Research Database (Denmark)

    Prichystal, Jan Phuklin; Hansen, Hans Nørgaard; Bladt, Henrik Henriksen

    2005-01-01

    for an integrated display in a metal surface is often ruled by design and functionality of a product. The integration of displays in metal surfaces requires metal removal in order to clear the area of the display to some extent. The idea behind an invisible display in Aluminum concerns the processing of a metal...

  1. A model of neural mechanisms in monocular transparent motion perception.

    Science.gov (United States)

    Raudies, Florian; Neumann, Heiko

    2010-01-01

    Transparent motion is perceived when multiple motions are presented in the same part of visual space that move in different directions or with different speeds. Several psychophysical as well as physiological experiments have studied the conditions under which motion transparency occurs. Few computational mechanisms have been proposed that allow to segregate multiple motions. We present a novel neural model which investigates the necessary mechanisms underlying initial motion detection, the required representations for velocity coding, and the integration and segregation of motion stimuli to account for the perception of transparent motion. The model extends a previously developed architecture for neural computations along the dorsal pathway, particularly, in cortical areas V1, MT, and MSTd. It emphasizes the role of feedforward cascade processing and feedback from higher to earlier processing stages for selective feature enhancement and tuning. Our results demonstrate that the model reproduces several key psychophysical findings in perceptual motion transparency using random dot stimuli. Moreover, the model is able to process transparent motion as well as opaque surface motion in real-world sequences of 3-d scenes. As a main thesis, we argue that the perception of transparent motion relies on the representation of multiple velocities at one spatial location; however, this feature is necessary but not sufficient to perceive transparency. It is suggested that the activations simultaneously representing multiple activities are subsequently integrated by separate mechanisms leading to the segregation of different overlapping segments.

  2. Liquid crystal displays for aircraft engineering

    Directory of Open Access Journals (Sweden)

    Kovalenko L. F.

    2009-06-01

    Full Text Available Operating conditions for liquid-crystal displays of aircraft instruments have been examined. Requirements to engineering of a liquid-crystal display for operation in severe environment have been formulated. The implementation options for liquid-crystal matrix illumination have been analyzed in order to ensure the sufficient brightness depending on external illumination of a display screen.

  3. Microfluidics for electronic paper-like displays

    NARCIS (Netherlands)

    Shui, Lingling; Hayes, Robert A.; Jin, Mingliang; Zhang, X.; Zhang, Xiao; Bai, Pengfei; van den Berg, Albert; Zhou, Guofu

    2014-01-01

    Displays are ubiquitous in modern life, and there is a growing need to develop active, full color, video-rate reflective displays that perform well in high-light conditions. The core of display technology is to generate or manipulate light in the visible wavelength. Colored fluids or fluids with

  4. conditions

    Directory of Open Access Journals (Sweden)

    M. Venkatesulu

    1996-01-01

    Full Text Available Solutions of initial value problems associated with a pair of ordinary differential systems (L1,L2 defined on two adjacent intervals I1 and I2 and satisfying certain interface-spatial conditions at the common end (interface point are studied.

  5. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC scheme to reduce the complexity in solving equations involving trigonometric. All inliers found are used to refine the winner solution through minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against the state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  6. Cortical dynamics of three-dimensional form, color, and brightness perception. 1. Monocular theory

    Energy Technology Data Exchange (ETDEWEB)

    Grossberg, S.

    1987-01-01

    A real-time visual-processing theory is developed to explain how three-dimensional form, color, and brightness percepts are coherently synthesized. The theory describes how several fundamental uncertainty principles that limit the computation of visual information at individual processing stages are resolved through parallel and hierarchical interactions among several processing stages. The theory provides unified analysis and many predictions of data about stereopsis, binocular rivalry, hyperacuity, McCollough effect, textural grouping, border distinctness, surface perception, monocular and binocular brightness percepts, filling-in, metacontrast, transparency, figural aftereffects, lateral inhibition within spatial frequency channels, proximity luminance covariance, tissue contrast, motion segmentation, and illusory figures, as well as about reciprocal interactions among the hypercolumns, blobs, and stripes of cortical areas V1, V2, and V4. Monocular and binocular interactions between a Boundary Contour (BC) System and a Feature Contour (FC) System are developed. The BC System, defined by a hierarchy of oriented interactions, synthesizes an emergent and coherent binocular boundary segmentation from combinations of unoriented and oriented scenic elements.

  7. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling.

    Science.gov (United States)

    Haouchine, Nazim; Dequidt, Jeremie; Berger, Marie-Odile; Cotin, Stephane

    2015-12-01

    This paper focuses on the 3D shape recovery and augmented reality on elastic objects with self-occlusions handling, using only single view images. Shape recovery from a monocular video sequence is an underconstrained problem and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility or resort to shading constraints. However, few of them can handle properly large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and able to handle highly elastic objects. The problem is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This method prevents us from formulating restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques with experiments conducted on computer-generated and real data that show the effectiveness of recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are also provided and results on deformations with the presence of self-occlusions are exposed.

  8. Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy

    Directory of Open Access Journals (Sweden)

    Cao Yuan

    2015-01-01

    Full Text Available This paper proposes a new real-time target tracking method based on the open-loop monocular vision motion control. It uses the particle filter technique to predict the moving target’s position in an image. Due to the properties of the particle filter, the method can effectively master the motion behaviors of the linear and nonlinear. In addition, the method uses the simple mathematical operation to transfer the image information in the mobile target to its real coordinate information. Therefore, it requires few operating resources. Moreover, the method adopts the monocular vision approach, which is a single camera, to achieve its objective by using few hardware resources. Firstly, the method evaluates the next time’s position and size of the target in an image. Later, the real position of the objective corresponding to the obtained information is predicted. At last, the mobile robot should be controlled in the center of the camera’s vision. The paper conducts the tracking test to the L-type and the S-type and compares with the Kalman filtering method. The experimental results show that the method achieves a better tracking effect in the L-shape experiment, and its effect is superior to the Kalman filter technique in the L-type or S-type tracking experiment.

  9. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Full Text Available PURPOSE: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision - Group 1; binocular vision - Group 2. METHODS: A transversal comparative study was performed using a structured questionnaire from a previous exploratory study before cataract surgery. RESULTS: 206 patients were enrolled in the study, 96 individuals in Group 1 (69.3 ± 10.4 years and 110 in Group 2 (68.2 ± 10.2 years. Most patients in group 1 (40.6% and 22.7% of group 2, reported fear of surgery (p<0.001. The most important causes of fear were: possibility of blindness, ocular complications and death during surgery. The most prevalent feelings among the groups were doubts about good results and nervousness. CONCLUSION: Patients with monocular vision reported more fear and doubts related to surgical outcomes. Thus, it is necessary that phisycians considers such emotional reactions and invest more time than usual explaining the risks and the benefits of cataract surgery.Ouvir

  10. Differential effects of head-mounted displays on visual performance.

    Science.gov (United States)

    Schega, Lutz; Hamacher, Daniel; Erfuth, Sandra; Behrens-Baumann, Wolfgang; Reupsch, Juliane; Hoffmann, Michael B

    2014-01-01

    Head-mounted displays (HMDs) virtually augment the visual world to aid visual task completion. Three types of HMDs were compared [look around (LA); optical see-through with organic light emitting diodes and virtual retinal display] to determine whether LA, leaving the observer functionally monocular, is inferior. Response times and error rates were determined for a combined visual search and Go-NoGo task. The costs of switching between displays were assessed separately. Finally, HMD effects on basic visual functions were quantified. Effects of HMDs on visual search and Go-NoGo task were small, but for LA display-switching costs for the Go-NoGo-task the effects were pronounced. Basic visual functions were most affected for LA (reduced visual acuity and visual field sensitivity, inaccurate vergence movements and absent stereo-vision). LA involved comparatively high switching costs for the Go-NoGo task, which might indicate reduced processing of external control cues. Reduced basic visual functions are a likely cause of this effect.

  11. Handbook of display technology

    CERN Document Server

    Castellano, Joseph A

    1992-01-01

    This book presents a comprehensive review of technical and commercial aspects of display technology. It provides design engineers with the information needed to select proper technology for new products. The book focuses on flat, thin displays such as light-emitting diodes, plasma display panels, and liquid crystal displays, but it also includes material on cathode ray tubes. Displays include a large number of products from televisions, auto dashboards, radios, and household appliances, to gasoline pumps, heart monitors, microwave ovens, and more.For more information on display tech

  12. Intermittent exotropia: comparative surgical results of lateral recti-recession and monocular recess-resect Exotropia intermitente: comparação dos resultados cirúrgicos entre retrocesso dos retos laterais e retrocesso-ressecção monocular

    Directory of Open Access Journals (Sweden)

    Vanessa Macedo Batista Fiorelli

    2007-06-01

    Full Text Available PURPOSE: To compare the results between recession of the lateral recti and monocular recess-resect procedure for the correction of the basic type of intermittent exotropia. METHODS: 115 patients with intermittent exotropia were submitted to surgery. The patients were divided into 4 groups, according to the magnitude of preoperative deviation and the surgical procedure was subsequently performed. Well compensated orthophoria or exo-or esophoria were considered surgical success, with minimum of 1 year follow-up after the operation. RESULTS: Success was obtained in 69% of the patients submitted to recession of the lateral recti, and in 77% submitted to monocular recess-resect. In the groups with deviations between 12 PD and 25 PD, surgical success was observed in 74% of the patients submitted to recession of the lateral recti and in 78% of the patients submitted to monocular recess-resect. (p=0.564. In the group with deviations between 26 PD and 35 PD, surgical success was observed in 65% out of the patients submitted to recession of the lateral recti and in 75% of the patients submitted to monocular recess-resect. (p=0.266. CONCLUSION: recession of lateral recti and monocular recess-resect were equally effective in correcting basic type intermittent exotropia according to its preoperative deviation in primary position.OBJETIVO: Comparar os resultados entre o retrocesso dos retos laterais e retrocesso-ressecção monocular, para correção de exotropia intermitente do tipo básico. MÉTODOS: Foram selecionados 115 prontuários de pacientes portadores de exotropia intermitente do tipo básico submetidos a cirurgia no período entre janeiro de 1991 e dezembro de 2001. Os planejamentos cirúrgicos seguiram orientação do setor de Motilidade Extrínseca Ocular da Clínica Oftalmológica da Santa Casa de São Paulo e basearam-se na magnitude do desvio na posição primária do olhar. Os pacientes foram divididos em 4 grupos, de acordo com a magnitude

  13. The perceived visual direction of monocular objects in random-dot stereograms is influenced by perceived depth and allelotropia.

    Science.gov (United States)

    Hariharan-Vilupuru, Srividhya; Bedell, Harold E

    2009-01-01

    The proposed influence of objects that are visible to both eyes on the perceived direction of an object that is seen by only one eye is known as the "capture of binocular visual direction". The purpose of this study was to evaluate whether stereoscopic depth perception is necessary for the "capture of binocular visual direction" to occur. In one pair of experiments, perceived alignment between two nearby monocular lines changed systematically with the magnitude and direction of horizontal but not vertical disparity. In four of the five observers, the effect of horizontal disparity on perceived alignment depended on which eye viewed the monocular lines. In additional experiments, the perceived alignment between the monocular lines changed systematically with the magnitude and direction of both horizontal and vertical disparities when the monocular line separation was increased from 1.1 degrees to 3.3 degrees . These results indicate that binocular capture depends on the perceived depth that results from horizontal retinal image disparity as well as allelotropia, or the averaging of local-sign information. Our data suggest that, during averaging, different weights are afforded to the local-sign information in the two eyes, depending on whether the separation between binocularly viewed targets is horizontal or vertical.

  14. Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues

    Science.gov (United States)

    Lebreton, Pierre; Raake, Alexander; Barkowsky, Marcus; Le Callet, Patrick

    2014-03-01

    The perception of depth in images and video sequences is based on different depth cues. Studies have considered depth perception threshold as a function of viewing distance (Cutting and Vishton, 1995), the combination of different monocular depth cues and their quantitative relation with binocular depth cues and their different possible type of interactions (Landy, l995). But these studies only consider artificial stimuli and none of them attempts to provide a quantitative contribution of monocular and binocular depth cues compared to each other in the specific context of natural images. This study targets this particular application case. The evaluation of the strength of different depth cues compared to each other using a carefully designed image database to cover as much as possible different combinations of monocular (linear perspective, texture gradient, relative size and defocus blur) and binocular depth cues. The 200 images were evaluated in two distinct subjective experiments to evaluate separately perceived depth and different monocular depth cues. The methodology and the description of the definition of the different scales will be detailed. The image database (DC3Dimg) is also released for the scientific community.

  15. Monocular SLAM for Visual Odometry: A Full Approach to the Delayed Inverse-Depth Feature Initialization Method

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2012-01-01

    Full Text Available This paper describes in a detailed manner a method to implement a simultaneous localization and mapping (SLAM system based on monocular vision for applications of visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments using only on-board sensors to simultaneously build a map of their surroundings; this map will be needed for the robot to track its position. In this context, the 6-DOF (degree of freedom monocular camera case (monocular SLAM possibly represents the harder variant of SLAM. In monocular SLAM, a single camera, which is freely moving through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features on the system. In this work, detailed formulation, extended discussions, and experiments with real data are presented in order to validate and to show the performance of the proposal.

  16. The Effect of Long Term Monocular Occlusion on Vernier Threshold: Elasticity in the Young Adult Visual System.

    Science.gov (United States)

    1986-06-01

    experiment, Brown and Salinger (1975) found a decrease of the X-cell 2 population in the lateral geniculate body of the adult cat. These investigators...D.L., and Salinger , W.L., "Loss of X-Cells in Lateral Geniculate Nucleus with Monocular Paralysis. Neural Plasticity in the Adult Cat", Science, 189

  17. [EXPERIMENTAL TESTING OF THE OPERATOR'S PERCEPTION OF SYMBOLIC INFORMATION ON THE HELMET-MOUNTED DISPLAY DEPENDING ON THE STRUCTURAL COMPLEXITY OF VISUAL ENVIRONMENT].

    Science.gov (United States)

    Lapa, V V; Ivanov, A I; Davydov, V V; Ryabinin, V A; Golosov S Yu

    2015-01-01

    The experiments showed that pilot's perception of symbolic information on the helmet-mounted display (HMD) depends on type of HMD (mono- or binocular), and structural complexity of the background image. Complex background extends time and increases errors in perception, particularly when monocular HMD is used. In extremely complicated visual situations (symbolic information on a background intricately structured by supposition of a TV image on real visual environment) significantly increases time and lowers precision of symbols perception no matter what the HMD type.

  18. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  19. Lunar Sample Display Locations

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA provides a number of lunar samples for display at museums, planetariums, and scientific expositions around the world. Lunar displays are open to the public....

  20. Monocular denervation of visual nuclei modulates APP processing and sAPPα production: A possible role on neural plasticity.

    Science.gov (United States)

    Vasques, Juliana Ferreira; Heringer, Pedro Vinícius Bastos; Gonçalves, Renata Guedes de Jesus; Campello-Costa, Paula; Serfaty, Claudio Alberto; Faria-Melibeu, Adriana da Cunha

    2017-08-01

    Amyloid precursor protein (APP) is essential to physiological processes such as synapse formation and neural plasticity. Sequential proteolysis of APP by beta- and gamma-secretases generates amyloid-beta peptide (Aβ), the main component of senile plaques in Alzheimer Disease. Alternative APP cleavage by alpha-secretase occurs within Aβ domain, releasing soluble α-APP (sAPPα), a neurotrophic fragment. Among other functions, sAPPα is important to synaptogenesis, neural survival and axonal growth. APP and sAPPα levels are increased in models of neuroplasticity, which suggests an important role for APP and its metabolites, especially sAPPα, in the rearranging brain. In this work we analyzed the effects of monocular enucleation (ME), a classical model of lesion-induced plasticity, upon APP content, processing and also in secretases levels. Besides, we addressed whether α-secretase activity is crucial for retinotectal remodeling after ME. Our results showed that ME induced a transient reduction in total APP content. We also detected an increase in α-secretase expression and in sAPP production concomitant with a reduction in Aβ and β-secretase contents. These data suggest that ME facilitates APP processing by the non-amyloidogenic pathway, increasing sAPPα levels. Indeed, the pharmacological inhibition of α-secretase activity reduced the axonal sprouting of ipsilateral retinocollicular projections from the intact eye after ME, suggesting that sAPPα is necessary for synaptic structural rearrangement. Understanding how APP processing is regulated under lesion conditions may provide new insights into APP physiological role on neural plasticity. Copyright © 2017 ISDN. Published by Elsevier Ltd. All rights reserved.

  1. Performance studies of electrochromic displays

    Science.gov (United States)

    Ionescu, Ciprian; Dobre, Robert Alexandru

    2015-02-01

    The idea of having flexible, very thin, light, low power and even low cost display devices implemented using new materials and technologies is very exciting. Nowadays we can talk about more than just concepts, such devices exist, and they are part of an emerging concept: FOLAE (Flexible Organic and Large Area Electronics). Among the advantages of electrochromic devices are the low power consumption (they are non-emissive, i.e. passive) and the aspect like ink on paper with good viewing angle. Some studies are still necessary for further development, before proper performances are met and the functional behavior can be predicted. This paper presents the results of the research activity conducted to develop electric characterization platform for the organic electronics display devices, especially electrochromic displays, to permit a thorough study. The hardware part of platform permits the measuring of different electric and optical parameters. Charging/discharging a display element presents high interest for optimal driving circuitry. In this sense, the corresponding waveforms are presented. The contrast of the display is also measured for different operation conditions as driving voltage levels and duration. The effect of temperature on electrical and optical parameters (contrast) of the display will be also presented.

  2. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicle, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. On the other hand, a camera cannot obtain the rotation continuously in the case where feature points cannot be extracted from images, although the accuracy is better than gyro sensors. To solve these problems we propose a method for combining these sensors based on Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by using the reliability judgment of camera rotations and devising the state value of the Extended Kalman Filter, even when the rotation is not continuously observable from the camera, the proposed method shows a good performance. Experimental results showed the effectiveness of the proposed method.

  3. Extracting hand articulations from monocular depth images using curvature scale space descriptors

    Institute of Scientific and Technical Information of China (English)

    Shao-fan WANG[1; Chun LI[1; De-hui KONG[1; Bao-cai YIN[2,1,3

    2016-01-01

    We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.

  4. Extracting hand articulations from monocular depth images using curvature scale space descriptors

    Institute of Scientific and Technical Information of China (English)

    Shao-fan WANG; Chun LI; De-hui KONG; Bao-cai YIN

    2016-01-01

    We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data;moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.

  5. Mobile Robot Simultaneous Localization and Mapping Based on a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Songmin Jia

    2016-01-01

    Full Text Available This paper proposes a novel monocular vision-based SLAM (Simultaneous Localization and Mapping algorithm for mobile robot. In this proposed method, the tracking and mapping procedures are split into two separate tasks and performed in parallel threads. In the tracking thread, a ground feature-based pose estimation method is employed to initialize the algorithm for the constraint moving of the mobile robot. And an initial map is built by triangulating the matched features for further tracking procedure. In the mapping thread, an epipolar searching procedure is utilized for finding the matching features. A homography-based outlier rejection method is adopted for rejecting the mismatched features. The indoor experimental results demonstrate that the proposed algorithm has a great performance on map building and verify the feasibility and effectiveness of the proposed algorithm.

  6. Navigation system for a small size lunar exploration rover with a monocular omnidirectional camera

    Science.gov (United States)

    Laîné, Mickaël.; Cruciani, Silvia; Palazzolo, Emanuele; Britton, Nathan J.; Cavarelli, Xavier; Yoshida, Kazuya

    2016-07-01

    A lunar rover requires an accurate localisation system in order to operate in an uninhabited environment. However, every additional piece of equipment mounted on it drastically increases the overall cost of the mission. This paper reports a possible solution for a micro-rover using a sole monocular omnidirectional camera. Our approach relies on a combination of feature tracking and template matching for Visual Odometry. The results are afterwards refined using a Graph-Based SLAM algorithm, which also provides a sparse reconstruction of the terrain. We tested the algorithm on a lunar rover prototype in a lunar analogue environment and the experiments show that the estimated trajectory is accurate and the combination with the template matching algorithm allows an otherwise poor detection of spot turns.

  7. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.

  8. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features. The grey values of the drogue's inner part are different from the external umbrella ribs, as shown in the image. The shape of the drogue's inner dark part is nearly circular. According to crucial prior knowledge, the rough and fine positioning algorithms are designed to detect the drogue. Particle filter based on the drogue's shape is proposed to track the drogue. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely in the detecting and tracking process and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method has good performance in real-time and satisfied robustness and positioning accuracy.

  9. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

    Full Text Available We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates virtual horizon from size and position of vehicles in captured image at run-time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not seen on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons manually identified, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.

  10. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    Science.gov (United States)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

    This paper develops the indoor mobile robot navigation by center following based on monocular vision. In our method, based on the frontal image, two boundary lines between the wall and baseboard are detected. Then, the appearance based obstacle detection is applied. When the obstacle exists, the avoidance or stop movement is worked according to the size and position of the obstacle, and when the obstacle does not exist, the robot moves at the center of the corridor. We developed the wheelchair based mobile robot. We estimated the accuracy of the boundary line detection, and obtained fast processing speed and high detection accuracy. We demonstrate the effectiveness of our mobile robot by the stopping experiments with various obstacles and moving experiments.

  11. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM System

    Directory of Open Access Journals (Sweden)

    Antoni Grau

    2013-07-01

    Full Text Available Simultaneous localization and mapping (SLAM is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based in a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  12. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    Science.gov (United States)

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based in a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  13. c-FOS expression in the visual system of tree shrews after monocular inactivation.

    Science.gov (United States)

    Takahata, Toru; Kaas, Jon H

    2017-01-01

    Tree shrews possess an unusual segregation of ocular inputs to sublayers rather than columns in the primary visual cortex (V1). In this study, the lateral geniculate nucleus (LGN), superior colliculus (SC), pulvinar, and V1 were examined for changes in c-FOS, an immediate-early gene, expression after 1 or 24 hours of monocular inactivation with tetrodotoxin (TTX) in tree shrews. Monocular inactivation greatly reduced gene expression in LGN layers related to the blocked eye, whereas normally high to moderate levels were maintained in the layers that receive inputs from the intact eye. The SC and caudal pulvinar contralateral to the blocked eye had greatly (SC) or moderately (pulvinar) reduced gene expressions reflective of dependence on the contralateral eye. c-FOS expression in V1 was greatly reduced contralateral to the blocked eye, with most of the expression that remained in upper layer 4a and lower 4b and lower layer 6 regions. In contrast, much of V1 contralateral to the active eye showed normal levels of c-FOS expression, including the inner parts of sublayers 4a and 4b and layers 2, 3, and 6. In some cases, upper layer 4a and lower 4b showed a reduction of gene expression. Layers 5 and sublayer 3c had normally low levels of gene expression. The results reveal the functional dominance of the contralateral eye in activating the SC, pulvinar, and V1, and the results from V1 suggest that the sublaminar organization of layer 4 is more complex than previously realized. J. Comp. Neurol. 525:151-165, 2017. © 2016 Wiley Periodicals, Inc.

  14. Computational multi-projection display.

    Science.gov (United States)

    Moon, Seokil; Park, Soon-Gi; Lee, Chang-Kun; Cho, Jaebum; Lee, Seungjae; Lee, Byoungho

    2016-04-18

    A computational multi-projection display is proposed by employing a multi-projection system combining with compressive light field displays. By modulating the intensity of light rays from a spatial light modulator inside a single projector, the proposed system can offer several compact views to observer. Since light rays are spread to all directions, the system can provide flexible positioning of viewpoints without stacking projectors in vertical direction. Also, if the system is constructed properly, it is possible to generate view images with inter-pupillary gap and satisfy the super multi-view condition. We explain the principle of the proposed system and verify its feasibility with simulations and experimental results.

  15. Invisible Display in Aluminum

    DEFF Research Database (Denmark)

    Prichystal, Jan Phuklin; Hansen, Hans Nørgaard; Bladt, Henrik Henriksen

    2005-01-01

    Bang & Olufsen a/s has been working with ideas for invisible integration of displays in metal surfaces. Invisible integration of information displays traditionally has been possible by placing displays behind transparent or semitransparent materials such as plastic or glass. The wish for an integ...... be obtained by shining light from the backside of the workpiece. When there is no light from the backside, the front surface seems totally untouched. This was achieved by laser ablation with ultra-short pulses.......Bang & Olufsen a/s has been working with ideas for invisible integration of displays in metal surfaces. Invisible integration of information displays traditionally has been possible by placing displays behind transparent or semitransparent materials such as plastic or glass. The wish...... for an integrated display in a metal surface is often ruled by design and functionality of a product. The integration of displays in metal surfaces requires metal removal in order to clear the area of the display to some extent. The idea behind an invisible display in Aluminum concerns the processing of a metal...

  16. Polyplanar optic display

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.; Biscardi, C.; Brewster, C.; DeSanto, L. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology; Beiser, L. [Leo Beiser Inc., Flushing, NY (United States)

    1997-07-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 100 milliwatt green solid state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the electronic interfacing to the DLP{trademark} chip, the opto-mechanical design and viewing angle characteristics.

  17. OLED displays and lighting

    CERN Document Server

    Koden, Mitsuhiro

    2017-01-01

    Organic light-emitting diodes (OLEDs) have emerged as the leading technology for the new display and lighting market. OLEDs are solid-state devices composed of thin films of organic molecules that create light with the application of electricity. OLEDs can provide brighter, crisper displays on electronic devices and use less power than conventional light-emitting diodes (LEDs) or liquid crystal displays (LCDs) used today. This book covers both the fundamentals and practical applications of flat and flexible OLEDs.

  18. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  19. JAVA Stereo Display Toolkit

    Science.gov (United States)

    Edmonds, Karina

    2008-01-01

    This toolkit provides a common interface for displaying graphical user interface (GUI) components in stereo using either specialized stereo display hardware (e.g., liquid crystal shutter or polarized glasses) or anaglyph display (red/blue glasses) on standard workstation displays. An application using this toolkit will work without modification in either environment, allowing stereo software to reach a wider audience without sacrificing high-quality display on dedicated hardware. The toolkit is written in Java for use with the Swing GUI Toolkit and has cross-platform compatibility. It hooks into the graphics system, allowing any standard Swing component to be displayed in stereo. It uses the OpenGL graphics library to control the stereo hardware and to perform the rendering. It also supports anaglyph and special stereo hardware using the same API (application-program interface), and has the ability to simulate color stereo in anaglyph mode by combining the red band of the left image with the green/blue bands of the right image. This is a low-level toolkit that accomplishes simply the display of components (including the JadeDisplay image display component). It does not include higher-level functions such as disparity adjustment, 3D cursor, or overlays all of which can be built using this toolkit.

  20. Monocular discs in the occlusion zones of binocular surfaces do not have quantitative depth--a comparison with Panum's limiting case.

    Science.gov (United States)

    Gillam, Barbara; Cook, Michael; Blackburn, Shane

    2003-01-01

    Da Vinci stereopsis is defined as apparent depth seen in a monocular object laterally adjacent to a binocular surface in a position consistent with its occlusion by the other eye. It is widely regarded as a new form of quantitative stereopsis because the depth seen is quantitatively related to the lateral separation of the monocular element and the binocular surface (Nakayama and Shimojo 1990 Vision Research 30 1811-1825). This can be predicted on the basis that the more separated the monocular element is from the surface the greater its minimum depth behind the surface would have to be to account for its monocular occlusion. Supporting evidence, however, has used narrow bars as the monocular elements, raising the possibility that quantitative depth as a function of separation could be attributable to Panum's limiting case (double fusion) rather than to a new form of stereopsis. We compared the depth performance of monocular objects fusible with the edge of the surface in the contralateral eye (lines) and non-fusible objects (disks) and found that, although the fusible objects showed highly quantitative depth, the disks did not, appearing behind the surface to the same degree at all separations from it. These findings indicate that, although there is a crude sense of depth for discrete monocular objects placed in a valid position for uniocular occlusion, depth is not quantitative. They also indicate that Panum's limiting case is not, as has sometimes been claimed, itself a case of da Vinci stereopsis since fusibility is a critical factor for seeing quantitative depth in discrete monocular objects relative to a binocular surface.

  1. Transposição monocular vertical dos músculos retos horizontais em pacientes esotrópicos portadores de anisotropia em A Monocular vertical displacement of the horizontal rectus muscles in esotropic patients with "A" pattern

    Directory of Open Access Journals (Sweden)

    Ana Carolina Toledo Dias

    2004-10-01

    Full Text Available OBJETIVO: Estudar a eficácia da transposição vertical monocular dos mús-culos retos horizontais, proposta por Goldstein, em pacientes esotrópicos portadores de anisotropia em A, sem hiperfunção de músculos oblíquos. MÉTODOS: Foram analisados, retrospectivamente, 23 prontuários de pacientes esotrópicos portadores de anisotropia em A > 10delta, submetidos a transposição vertical monocular dos músculos retos horizontais. Os pacientes foram divididos em 2 grupos, de acordo com a magnitude da incomitância pré-operatória; grupo 1 era composto de pacientes com desvio entre 11delta e 20delta e grupo 2 entre 21delta e 30delta. Foram considerados co-mo resultados satisfatórios as correções com A PURPOSE: To report the effectiveness of the vertical monocular displacement of the horizontal rectus muscles, proposed by Goldstein, in esotropic patients with A pattern, without oblique muscle overaction. METHODS: A retrospective study was performed using the charts of 23 esotropic patients with A pattern > 10delta, submitted to vertical monocular displacement of the horizontal rectus muscles. The patients were divided into 2 groups in agreement with the magnitude of the preoperative deviation, group 1 (11delta to 20delta and group 2 (21delta to 30delta. Satisfactory results were considered when corrections A < 10delta or V < 15delta were obtained. RESULTS: The average of absolute correction was, in group 1, 16.5delta and, in group 2, 16.6delta. In group 1, 91.6% of the patients presented satisfactory surgical results and in group 2, 81.8% (p = 0.468. CONCLUSION: The surgical procedure, proposed by Goldstein, is effective and there was no statistical difference between the magnitude of the preoperative anisotropia and the obtained correction.

  2. 单目视觉同步定位与地图创建方法综述%A survey of monocular simultaneous localization and mapping

    Institute of Scientific and Technical Information of China (English)

    顾照鹏; 刘宏

    2015-01-01

    随着计算机视觉技术的发展,基于单目视觉的同步定位与地图创建( monocular SLAM)逐渐成为计算机视觉领域的热点问题之一。介绍了单目视觉SLAM方法的分类,从视觉特征检测与匹配、数据关联的优化、特征点深度的获取、地图的尺度控制几个方面阐述了单目视觉SLAM研究的发展现状。最后,介绍了常见的单目视觉与其他传感器结合的SLAM方法,并探讨了单目视觉SLAM未来的研究方向。%With the development of computer vision technology, monocular simultaneous localization and mapping ( monocular SLAM) has gradually become one of the hot issues in the field of computer vision.This paper intro-duces the monocular vision SLAM classification that relates to the present status of research in monocular SLAM methods from several aspects, including visual feature detection and matching, optimization of data association, depth acquisition of feature points, and map scale control.Monocular SLAM methods combining with other sensors are reviewed and significant issues needing further study are discussed.

  3. Embedding perspective cue in holographic projection display by virtual variable-focal-length lenses

    Science.gov (United States)

    Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Zhao, Fuliang

    2014-10-01

    To make a view perspective cue emerging in reconstructed images, a new approach is proposed by incorporating virtual variable-focal-length lenses into computer generated Fourier hologram (CGFH). This approach is based on a combination of monocular vision principle and digital hologram display, thus it owns properties coming from the two display models simultaneously. Therefore, it can overcome the drawback of the unsatisfied visual depth perception of the reconstructed three-dimensional (3D) images in holographic projection display (HPD). Firstly, an analysis on characteristics of conventional CGFH reconstruction is made, which indicates that a finite depthof- focus and a non-adjustable lateral magnification are reasons of the depth information lack on a fixed image plane. Secondly, the principle of controlling lateral magnification in wave-front reconstructions by virtual lenses is demonstrated. And the relation model is deduced, involving the depth of object, the parameters of virtual lenses, and the lateral magnification. Next, the focal-lengths of virtual lenses are determined by considering perspective distortion of human vision. After employing virtual lenses in the CGFH, the reconstructed image on focal-plane can deliver the same depth cues as that of the monocular stereoscopic image. Finally, the depthof- focus enhancement produced by a virtual lens and the effect on the reconstruction quality from the virtual lens are described. Numerical simulation and electro-optical reconstruction experimental results prove that the proposed algorithm can improve the depth perception of the reconstructed 3D image in HPD. The proposed method provides a possibility of uniting multiple display models to enhance 3D display performance and viewer experience.

  4. Standardizing visual display quality

    NARCIS (Netherlands)

    Besuijen, Ko; Spenkelink, Gerd P.J.

    1998-01-01

    The current ISO 9241–3 standard for visual display quality and the proposed user performance tests are reviewed. The standard is found to be more engineering than ergonomic and problems with system configuration, software applications, display settings, user behaviour, wear and physical environment

  5. Polyplanar optical display electronics

    Energy Technology Data Exchange (ETDEWEB)

    DeSanto, L.; Biscardi, C. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology

    1997-07-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. The prototype ten inch display is two inches thick and has a matte black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. In order to achieve a long lifetime, the new display uses a 100 milliwatt green solid-state laser (10,000 hr. life) at 532 nm as its light source. To produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP{trademark}) chip manufactured by Texas Instruments. In order to use the solid-state laser as the light source and also fit within the constraints of the B-52 display, the Digital Micromirror Device (DMD{trademark}) circuit board is removed from the Texas Instruments DLP light engine assembly. Due to the compact architecture of the projection system within the display chassis, the DMD{trademark} chip is operated remotely from the Texas Instruments circuit board. The authors discuss the operation of the DMD{trademark} divorced from the light engine and the interfacing of the DMD{trademark} board with various video formats (CVBS, Y/C or S-video and RGB) including the format specific to the B-52 aircraft. A brief discussion of the electronics required to drive the laser is also presented.

  6. Visual merchandising window display

    Directory of Open Access Journals (Sweden)

    Opris (Cas. Stanila M.

    2013-12-01

    Full Text Available Window display plays a major part in the selling strategies; it does not only include the simple display of goods, nowadays it is a form of art, also having the purpose of sustaining the brand image. This article wants to reveal the tools that are essential in creating a fabulous window display. Being a window designer is not an easy job, you have to always think ahead trends, to have a sense of colour, to know how to use light to attract customers in the store after only one glance at the window. The big store window displays are theatre scenes: with expensive backgrounds, special effects and high fashion mannequins. The final role of the displays is to convince customers to enter the store and trigger the purchasing act which is the final goal of the retail activity.

  7. Defense display market assessment

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1998-09-01

    This paper addresses the number, function and size of principal military displays and establishes a basis to determine the opportunities for technology insertion in the immediate future and into the next millennium. Principal military displays are defined as those occupying appreciable crewstation real-estate and/or those without which the platform could not carry out its intended mission. DoD 'office' applications are excluded from this study. The military displays market is specified by such parameters as active area and footprint size, and other characteristics such as luminance, gray scale, resolution, angle, color, video capability, and night vision imaging system (NVIS) compatibility. Funded, future acquisitions, planned and predicted crewstation modification kits, and form-fit upgrades are taken into account. This paper provides an overview of the DoD niche market, allowing both government and industry a necessary reference by which to meet DoD requirements for military displays in a timely and cost-effective manner. The aggregate DoD market for direct-view and large-area military displays is presently estimated to be in excess of 242,000. Miniature displays are those which must be magnified to be viewed, involve a significantly different manufacturing paradigm and are used in helmet mounted displays and thermal weapon sight applications. Some 114,000 miniature displays are presently included within Service weapon system acquisition plans. For vendor production planning purposes it is noted that foreign military sales could substantially increase these quantities. The vanishing vendor syndrome (VVS) for older display technologies continues to be a growing, pervasive problem throughout DoD, which consequently must leverage the more modern display technologies being developed for civil- commercial markets.

  8. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision

    OpenAIRE

    Gillespie-Gallery, H.; Konstantakopoulou, E.; HARLOW, J.A.; Barbur, J. L.

    2013-01-01

    Purpose: It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. Methods: 95 participants aged 20 to 85 were recruited. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C opt...

  9. Striations in Plasma Display Panel

    Institute of Scientific and Technical Information of China (English)

    OUYANG Ji-Ting; CAO Jing; MIAO Jin-Song

    2005-01-01

    The phenomenon of striation has been investigated experimentally in a macroscopic ac-plasma display panel (PDP). The relationship between the characteristics of striation and the operating conditions, including voltage, frequency, rib and electrode configuration, etc., is obtained experimentally. The origin of the striations is considered to be ionization waves in the transient positive column near the dielectric surface in the anode area during the discharge, with the perturbation caused by resonance kinetic effects in the inert gas.

  10. Panoramic projection avionics displays

    Science.gov (United States)

    Kalmanash, Michael H.

    2003-09-01

    Avionics projection displays are entering production in advanced tactical aircraft. Early adopters of this technology in the avionics community used projection displays to replace or upgrade earlier units incorporating direct-view CRT or AMLCD devices. Typical motivations for these upgrades were the alleviation of performance, cost and display device availability concerns. In these systems, the upgraded (projection) displays were one-for-one form/fit replacements for the earlier units. As projection technology has matured, this situation has begun to evolve. The Lockheed-Martin F-35 is the first program in which the cockpit has been specifically designed to take advantage of one of the more unique capabilities of rear projection display technology, namely the ability to replace multiple small screens with a single large conformal viewing surface in the form of a panoramic display. Other programs are expected to follow, since the panoramic formats enable increased mission effectiveness, reduced cost and greater information transfer to the pilot. Some of the advantages and technical challenges associated with panoramic projection displays for avionics applications are described below.

  11. Quality of life in patients with age-related macular degeneration with monocular and binocular legal blindness Qualidade de vida de pacientes com degeneração macular relacionada à idade com cegueira legal monocular e binocular

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2007-01-01

    OBJECTIVE: To evaluate the quality of life of persons affected by age-related macular degeneration resulting in monocular or binocular legal blindness. METHODS: An analytic transversal study using the National Eye Institute Visual Functioning Questionnaire (NEI VFQ-25) was performed. Inclusion criteria were persons of both genders, aged more than 50 years, absence of cataract, diagnosis of age-related macular degeneration in at least one eye, and absence of other macular diseases. The control group was paired by sex and age and had no ocular disease. RESULTS: Group 1 (monocular legal blindness) was composed of 54 patients (72.22% females and 27.78% males), aged 51 to 87 years (mean age 74.61 ± 7.27 years); group 2 (binocular legal blindness) was composed of 54 patients (46.30% females and 53.70% males), aged 54 to 87 years (mean age 75.61 ± 6.34 years). The control group was composed of 40 patients (40% females and 60% males), aged 50 to 81 years (mean age 65.65 ± 7.56 years). Most scores were statistically significantly higher in group 1 and the control group than in group 2, and higher in the control group than in group 1. CONCLUSIONS: The quality of life of persons with binocular blindness was more limited than that of persons with monocular blindness, and both groups showed significant impairment in quality of life compared with normal persons.

  12. Visual perceptual issues of the integrated helmet and display sighting system (IHADSS): four expert perspectives

    Science.gov (United States)

    Rash, Clarence E.; Heinecke, Kevin; Francis, Gregory; Hiatt, Keith L.

    2008-04-01

    The Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display (HMD) has been flown for over a quarter of a century on the U.S. Army's AH-64 Apache Attack Helicopter. The aircraft's successful deployment in both peacetime and combat has validated the original design concept for the IHADSS HMD. During its 1970s development phase, a number of design issues were identified as having the potential of introducing visual perception problems for aviators. These issues include monocular design, monochromatic imagery, reduced field-of-view (FOV), sensor spectrum, reduced resolution (effective visual acuity), and displaced visual input eye point. From their diverse perspectives, a panel of four experts - an HMD researcher, a cognitive psychologist, a flight surgeon, and a veteran AH-64 aviator - discuss the impact of the design issues on visual perception and related performance.

  13. Small - Display Cartography

    DEFF Research Database (Denmark)

    Nissen, Flemming; Hvas, Anders; Münster-Swendsen, Jørgen

    This report comprises the work carried out in the work-package of small-display cartography. The work-package has aimed at creating a general framework for small-display cartography. A solid framework facilitates an increased use of spatial data in mobile devices - thus enabling, together with the rapidly evolving positioning techniques, a new category of position-dependent, map-based services to be introduced. The report consists of the following parts: Part I: Categorization of handheld devices; Part II: Cartographic design for small-display devices; Part III: Study on the GiMoDig Client - Portal Service Communication; and finally, Part IV: Concluding remarks and topics for further research on small-display cartography. Part II includes a separate Appendix D consisting of a cartographic design specification. Part III includes a separate Appendix C consisting of a schema specification, a separate...

  14. Flexible displays, rigid designs?

    DEFF Research Database (Denmark)

    Hornbæk, Kasper

    2015-01-01

    Rapid technological progress has enabled a wide range of flexible displays for computing devices, but the user experience--which we're only beginning to understand--will be the key driver for successful designs....

  15. Monocular and binocular steady-state flicker VEPs: frequency-response functions to sinusoidal and square-wave luminance modulation.

    Science.gov (United States)

    Nicol, David S; Hamilton, Ruth; Shahani, Uma; McCulloch, Daphne L

    2011-02-01

    Steady-state VEPs to full-field flicker (FFF) using sinusoidally modulated light were compared with those elicited by square-wave modulated light across a wide range of stimulus frequencies with monocular and binocular FFF stimulation. Binocular and monocular VEPs were elicited in 12 adult volunteers to FFF with two modes of temporal modulation: sinusoidal or square-wave (abrupt onset and offset, 50% duty cycle) at ten temporal frequencies ranging from 2.83 to 58.8 Hz. All stimuli had a mean luminance of 100 cd/m² with an 80% modulation depth (20-180 cd/m²). Response magnitudes at the stimulus frequency (F1) and at the double and triple harmonics (F2 and F3) were compared. For both sinusoidal and square-wave flicker, the FFF-VEP magnitudes at F1 were maximal for 7.52 Hz flicker. F2 was maximal for 5.29 Hz flicker, and F3 magnitudes were largest for flicker stimulation from 3.75 to 7.52 Hz. Square-wave flicker produced significantly larger F1 and F2 magnitudes for slow flicker rates (up to 5.29 Hz for F1; at 2.83 and 3.75 Hz for F2). The F3 magnitudes were larger overall for square-wave flicker. Binocular FFF-VEP magnitudes are larger than those of monocular FFF-VEPs, and the amount of this binocular enhancement is not dependent on the mode of flicker stimulation (mean binocular:monocular ratio 1.41, 95% CI: 1.2-1.6). Binocular enhancement of F1 for 21.3 Hz flicker was increased to a factor of 2.5 (95% CI: 1.8-3.5). In the healthy adult visual system, FFF-VEP magnitudes can be characterized by the frequency-response functions of F1, F2 and F3. Low-frequency roll-off in the FFF-VEP magnitudes is greater for sinusoidal flicker than for square-wave flicker for rates ≤ 5.29 Hz; magnitudes for higher-frequency flicker are similar for the two types of flicker. Binocular FFF-VEPs are larger overall than those recorded monocularly, and this binocular summation is enhanced at 21.3 Hz in the mid-frequency range.
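
    The harmonic analysis described in this record (response magnitudes at F1, F2 and F3) can be reproduced on any steady-state recording with a discrete Fourier transform; the sketch below uses a synthetic trace, and the sampling rate, epoch length and amplitudes are assumptions for illustration only.

```python
import numpy as np

# Sketch: amplitude of a steady-state VEP at the stimulus frequency (F1) and
# its 2nd/3rd harmonics, read off the nearest FFT bins. Signal is synthetic.
fs = 1000.0                      # Hz, sampling rate (assumed)
f_stim = 7.52                    # Hz, flicker frequency
t = np.arange(0, 10, 1 / fs)     # 10 s epoch

# Synthetic VEP: responses at F1, F2, F3 plus noise.
vep = (2.0 * np.sin(2 * np.pi * f_stim * t)
       + 1.0 * np.sin(2 * np.pi * 2 * f_stim * t)
       + 0.5 * np.sin(2 * np.pi * 3 * f_stim * t)
       + np.random.normal(0, 0.5, t.size))

spectrum = np.abs(np.fft.rfft(vep)) / (t.size / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def magnitude_at(f):
    """Amplitude at the FFT bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

for name, f in [("F1", f_stim), ("F2", 2 * f_stim), ("F3", 3 * f_stim)]:
    print(name, round(magnitude_at(f), 2))
```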

  16. Liquid Crystal Airborne Display

    Science.gov (United States)

    1977-08-01

    A display using a large advertising alphanumeric (TCI) has been added to the front of the optical box used in the F-4 aircraft for HUD... properties over a wide range of temperatures, including normal room temperature. What are liquid crystals? Liquid crystals have been classified in three... functions and to present data needed for the semi-automatic and manual control of system functions. Existing aircraft using CRT displays...

  17. Military display performance parameters

    Science.gov (United States)

    Desjardins, Daniel D.; Meyer, Frederick

    2012-06-01

    The military display market is analyzed in terms of four of its segments: avionics, vetronics, dismounted soldier, and command and control. Requirements are summarized for a number of technology-driving parameters, to include luminance, night vision imaging system compatibility, gray levels, resolution, dimming range, viewing angle, video capability, altitude, temperature, shock and vibration, etc., for direct-view and virtual-view displays in cockpits and crew stations. Technical specifications are discussed for selected programs.

  18. Raster graphics display library

    Science.gov (United States)

    Grimsrud, Anders; Stephenson, Michael B.

    1987-01-01

    The Raster Graphics Display Library (RGDL) is a high-level subroutine package that gives the advanced raster graphics display capabilities needed. The RGDL uses FORTRAN source code routines to build subroutines modular enough to use as stand-alone routines in a black box type of environment. Six examples are presented which will teach the use of RGDL in the fastest, most complete way possible. Routines within the display library that are used to produce raster graphics are presented in alphabetical order, each on a separate page. Each user-callable routine is described by function and calling parameters. All common blocks that are used in the display library are listed and the use of each variable within each common block is discussed. A reference on the include files that are necessary to compile the display library is included. Each include file and its purpose are listed. The link map for MOVIE.BYU version 6, a general purpose computer graphics display system that uses RGDL software, is also included.

  19. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft's nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  20. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Science.gov (United States)

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
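
    The inverse perspective mapping step at the heart of this approach can be sketched with OpenCV's homography warp; the ground-plane correspondences below are placeholders that would normally come from the robot's camera calibration, not values from the paper.

```python
import cv2
import numpy as np

# Sketch of inverse perspective mapping (IPM): warp the camera's view of the
# floor into a top-down (bird's-eye) view. Correspondences are illustrative;
# real ones come from intrinsic/extrinsic calibration.
frame = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in camera frame
h, w = frame.shape[:2]

# Four floor points in the image and their bird's-eye positions (assumed).
src = np.float32([[w * 0.2, h * 0.9], [w * 0.8, h * 0.9],
                  [w * 0.7, h * 0.6], [w * 0.3, h * 0.6]])
dst = np.float32([[100, 400], [300, 400], [300, 100], [100, 100]])

H = cv2.getPerspectiveTransform(src, dst)
birdseye = cv2.warpPerspective(frame, H, (400, 500))

# Anything above the floor violates the flat-ground assumption, so obstacle
# pixels are distorted in the IPM image and depart from the floor appearance
# model; that residual is what the MRF segmentation stage then labels.
print(birdseye.shape)
```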

  1. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the downside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images, and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain the same resolution of ground images. A forward-looking camera is mounted on the upside of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution, and to detect the relative altitude along the flight path.

  2. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Directory of Open Access Journals (Sweden)

    Igor S. G. Campos

    2016-12-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy.

  3. A Height Estimation Approach for Terrain Following Flights from Monocular Vision.

    Science.gov (United States)

    Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz

    2016-12-06

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.
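
    A much simplified version of the pipeline described in the two records above can be sketched as follows; the focal length, ground speed, frame rate and training samples are invented placeholders, and the small decision tree stands in for whatever reliability features the authors actually used.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Sketch (not the authors' code): height from the optical flow of ground
# features under a downward-looking camera, h ~ v * f / flow, plus a learned
# reliability check. All numbers below are illustrative.
focal_px = 800.0                 # focal length in pixels (assumed)
speed = 12.0                     # m/s ground speed from the autopilot (assumed)
frame_dt = 1 / 30.0              # s between frames

def estimate_height(flow_px):
    """flow_px: per-feature optical-flow displacement in pixels per frame."""
    flow_rate = np.median(flow_px) / frame_dt     # px/s, robust to outliers
    return speed * focal_px / flow_rate           # metres above ground

# Reliability classifier: features here are flow statistics (median, spread,
# number of tracked points); labels mark whether the estimate was trustworthy.
X_train = np.array([[12.0, 1.5, 180], [11.5, 1.2, 200],
                    [3.0, 6.0, 20],   [2.5, 7.5, 15]])
y_train = np.array([1, 1, 0, 0])      # 1 = trustworthy, 0 = not
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

flow = np.random.normal(10.0, 1.0, size=150)      # synthetic tracked flow
features = np.array([[np.median(flow), np.std(flow), flow.size]])
if clf.predict(features)[0] == 1:
    print("height estimate: %.1f m" % estimate_height(flow))
```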

  4. RBF-Based Monocular Vision Navigation for Small Vehicles in Narrow Space below Maize Canopy

    Directory of Open Access Journals (Sweden)

    Lu Liu

    2016-06-01

    Maize is one of the major food crops in China. Traditionally, field operations are done by manual labor, where the farmers are threatened by the harsh environment and pesticides. On the other hand, it is difficult for large machinery to maneuver in the field due to limited space, particularly in the middle and late growth stage of maize. Unmanned, compact agricultural machines, therefore, are ideal for such field work. This paper describes a method of monocular visual recognition to navigate small vehicles between narrow crop rows. Edge detection and noise elimination were used for image segmentation to extract the stalks in the image. The stalk coordinates define passable boundaries, and a simplified radial basis function (RBF)-based algorithm was adapted for path planning to improve the fault tolerance of stalk coordinate extraction. The average image processing time, including network latency, is 220 ms. The average time consumption for path planning is 30 ms. The fast processing ensures a top speed of 2 m/s for our prototype vehicle. When operating at the normal speed (0.7 m/s), the rate of collision with stalks is under 6.4%. Additional simulations and field tests further proved the feasibility and fault tolerance of our method.

  5. Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion

    Directory of Open Access Journals (Sweden)

    Huajun Liu

    2016-01-01

    This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate micro motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching of outdoor scenes; at the same time, the multi-scale strategy overcomes the problem of road surface self-similarity and local occlusions. Secondly, a support probability of flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed based not only on image motion residuals but also their distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of inlier parts of optical flow. Thirdly, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
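
    The idea of weighting candidate flow vectors by a support probability before estimating ego-motion can be illustrated with a toy example, reduced here to a 2D image-plane translation estimated with an iteratively reweighted mean; this is not the paper's full MLE model, and all numbers are synthetic.

```python
import numpy as np

# Sketch: candidate flow vectors are weighted by a support probability so that
# outliers (moving objects, mismatches) contribute little to the ego-motion
# estimate. Here the "motion" is just a 2-D translation for illustration.
rng = np.random.default_rng(0)
true_t = np.array([4.0, -1.5])                     # px/frame ego-translation
inliers = true_t + rng.normal(0, 0.3, size=(80, 2))
outliers = rng.uniform(-15, 15, size=(20, 2))      # moving objects, mismatches
flows = np.vstack([inliers, outliers])

t = np.median(flows, axis=0)                       # robust initialisation
sigma = 1.0
for _ in range(10):                                # iteratively reweighted mean
    residuals = np.linalg.norm(flows - t, axis=1)
    support = np.exp(-0.5 * (residuals / sigma) ** 2)   # support probability
    t = (support[:, None] * flows).sum(axis=0) / support.sum()

print("estimated translation:", np.round(t, 2))    # close to true_t
```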

  6. Acute Myeloid Leukemia Relapse Presenting as Complete Monocular Vision Loss due to Optic Nerve Involvement

    Directory of Open Access Journals (Sweden)

    Shyam A. Patel

    2016-01-01

    Acute myeloid leukemia (AML) involvement of the central nervous system is relatively rare, and detection of leptomeningeal disease typically occurs only after a patient presents with neurological symptoms. The case herein describes a 48-year-old man with relapsed/refractory AML of the mixed lineage leukemia rearrangement subtype, who presents with monocular vision loss due to leukemic eye infiltration. MRI revealed right optic nerve sheath enhancement and restricted diffusion concerning for nerve ischemia and infarct from hypercellularity. Cerebrospinal fluid (CSF) analysis showed a total WBC count of 81/mcl with 96% AML blasts. The onset and progression of visual loss were in concordance with rise in peripheral blood blast count. A low threshold for diagnosis of CSF involvement should be maintained in patients with hyperleukocytosis and high-risk cytogenetics so that prompt treatment with whole brain radiation and intrathecal chemotherapy can be delivered. This case suggests that the eye, as an immunoprivileged site, may serve as a sanctuary from which leukemic cells can resurge and contribute to relapsed disease in patients with high-risk cytogenetics.

  7. Cross-Covariance Estimation for Ekf-Based Inertial Aided Monocular Slam

    Science.gov (United States)

    Kleinert, M.; Stilla, U.

    2011-04-01

    Repeated observation of several characteristically textured surface elements allows the reconstruction of the camera trajectory and a sparse point cloud which is often referred to as "map". The extended Kalman filter (EKF) is a popular method to address this problem, especially if real-time constraints have to be met. Inertial measurements as well as a parameterization of the state vector that conforms better to the linearity assumptions made by the EKF may be employed to reduce the impact of linearization errors. Therefore, we adopt an inertial-aided monocular SLAM approach where landmarks are parameterized in inverse depth w.r.t. the coordinate system in which they were observed for the first time. In this work we present a method to estimate the cross-covariances between landmarks which are introduced in the EKF state vector for the first time and the old filter state that can be applied in the special case at hand where each landmark is parameterized w.r.t. an individual coordinate system.
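
    The cross-covariance bookkeeping described in this record can be written compactly as a covariance augmentation step; the sketch below shows the generic form for a new landmark y = g(x, z), with made-up dimensions and placeholder Jacobians rather than the paper's inverse-depth expressions.

```python
import numpy as np

# Sketch of covariance augmentation when a new (inverse-depth) landmark
# y = g(x, z) is appended to the EKF state. Gx and Gz are the Jacobians of g
# w.r.t. the old state x and the measurement z; shapes are illustrative.
n_x, n_y, n_z = 13, 6, 2                     # old state, landmark, measurement

P = np.eye(n_x) * 0.01                       # old state covariance
R = np.eye(n_z) * 1.0                        # measurement noise (pixels^2)
Gx = np.random.randn(n_y, n_x) * 0.1         # placeholder Jacobian dg/dx
Gz = np.random.randn(n_y, n_z) * 0.1         # placeholder Jacobian dg/dz

P_xy = P @ Gx.T                              # cross-covariance state <-> landmark
P_yy = Gx @ P @ Gx.T + Gz @ R @ Gz.T         # landmark covariance

P_aug = np.block([[P,      P_xy],
                  [P_xy.T, P_yy]])
print(P_aug.shape)                           # (n_x + n_y, n_x + n_y)
```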

  8. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation

    Science.gov (United States)

    Cao, Yuanzhouhan; Shen, Chunhua; Shen, Heng Tao

    2017-02-01

    Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.

  9. Why is binocular rivalry uncommon? Discrepant monocular images in the real world

    Directory of Open Access Journals (Sweden)

    Derek Henry Arnold

    2011-10-01

    When different images project to corresponding points in the two eyes they can instigate a phenomenon called binocular rivalry (BR), wherein each image seems to intermittently disappear such that only one of the two images is seen at a time. Cautious readers may have noted an important caveat in the opening sentence – this situation can instigate BR, but usually it doesn’t. Unmatched monocular images are frequently encountered in daily life due to either differential occlusions of the two eyes or because of selective obstructions of just one eye, but this does not tend to induce BR. Here I will explore the reasons for this and discuss implications for BR in general. It will be argued that BR is resolved in favour of the instantaneously stronger neural signal, and that this process is driven by an adaptation that enhances the visibility of distant fixated objects over that of more proximate obstructions of an eye. Accordingly, BR would reflect the dynamics of an inherently visual operation that usually deals with real-world constraints.

  10. A Height Estimation Approach for Terrain Following Flights from Monocular Vision

    Science.gov (United States)

    Campos, Igor S. G.; Nascimento, Erickson R.; Freitas, Gustavo M.; Chaimowicz, Luiz

    2016-01-01

    In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm presented good accuracy. PMID:27929424

  11. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Tae-Jae Lee

    2016-03-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.

  12. Development of an indoor positioning and navigation system using monocular SLAM and IMU

    Science.gov (United States)

    Mai, Yu-Ching; Lai, Ying-Chih

    2016-07-01

    The positioning and navigation systems based on the Global Positioning System (GPS) have been developed over past decades and have been widely used for outdoor environments. However, high-rise buildings or indoor environments can block the satellite signal. Therefore, many indoor positioning methods have been developed to respond to this issue. In addition to distance measurements using sonar and laser sensors, this study aims to develop a method by integrating a monocular simultaneous localization and mapping (MonoSLAM) algorithm with an inertial measurement unit (IMU) to build an indoor positioning system. The MonoSLAM algorithm measures the distance (depth) between the image features and the camera. With the help of an Extended Kalman Filter (EKF), MonoSLAM can provide real-time position, velocity and camera attitude in the world frame. Since the feature points will not always appear and cannot be trusted at all times, a wrong estimation of the features will cause the estimated position to diverge. To overcome this problem, a multisensor fusion algorithm was applied in this study by using a multi-rate Kalman Filter. Finally, from the experimental results, the proposed system was verified to be able to improve the reliability and accuracy of MonoSLAM by integrating the IMU measurements.
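
    The multi-rate fusion idea (predict at the IMU rate, correct whenever a slower vision fix arrives) can be sketched with a toy one-dimensional Kalman filter; the rates, noise levels and constant-velocity model are assumptions for illustration, not the system described in the record.

```python
import numpy as np

# Toy multi-rate Kalman filter: predict at the IMU rate (100 Hz here), correct
# whenever a slower vision position fix arrives (every 10th step, i.e. 10 Hz).
# 1-D constant-velocity model for illustration only.
dt_imu, cam_every = 0.01, 10
F = np.array([[1, dt_imu], [0, 1]])           # state: [position, velocity]
Q = np.diag([1e-4, 1e-3])
H = np.array([[1.0, 0.0]])                    # vision measures position
R = np.array([[0.05 ** 2]])

x = np.zeros((2, 1))
P = np.eye(2)
rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 0.5

for k in range(500):
    true_pos += true_vel * dt_imu
    accel = rng.normal(0, 0.02)               # noisy IMU acceleration input
    x = F @ x + np.array([[0.5 * dt_imu**2], [dt_imu]]) * accel
    P = F @ P @ F.T + Q
    if k % cam_every == 0:                    # vision update at the slower rate
        z = np.array([[true_pos + rng.normal(0, 0.05)]])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print("estimated position: %.2f m (true %.2f m)" % (x[0, 0], true_pos))
```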

  13. Dynamic plasmonic colour display

    Science.gov (United States)

    Duan, Xiaoyang; Kamin, Simon; Liu, Na

    2017-02-01

    Plasmonic colour printing based on engineered metasurfaces has revolutionized colour display science due to its unprecedented subwavelength resolution and high-density optical data storage. However, advanced plasmonic displays with novel functionalities including dynamic multicolour printing, animations, and highly secure encryption have remained in their infancy. Here we demonstrate a dynamic plasmonic colour display technique that enables all the aforementioned functionalities using catalytic magnesium metasurfaces. Controlled hydrogenation and dehydrogenation of the constituent magnesium nanoparticles, which serve as dynamic pixels, allow for plasmonic colour printing, tuning, erasing and restoration of colour. Different dynamic pixels feature distinct colour transformation kinetics, enabling plasmonic animations. Through smart material processing, information encoded on selected pixels, which are indiscernible to both optical and scanning electron microscopies, can only be read out using hydrogen as a decoding key, suggesting a new generation of information encryption and anti-counterfeiting applications.

  14. The Ultimate Display

    CERN Document Server

    Fluke, C J

    2016-01-01

    Astronomical images and datasets are increasingly high-resolution and multi-dimensional. The vast majority of astronomers perform all of their visualisation and analysis tasks on low-resolution, two-dimensional desktop monitors. If there were no technological barriers to designing the ultimate stereoscopic display for astronomy, what would it look like? What capabilities would we require of our compute hardware to drive it? And are existing technologies even close to providing a true 3D experience that is compatible with the depth resolution of human stereoscopic vision? We consider the CAVE2 (an 80 Megapixel, hybrid 2D and 3D virtual reality environment directly integrated with a 100 Tflop/s GPU-powered supercomputer) and the Oculus Rift (a low-cost, head-mounted display) as examples at opposite financial ends of the immersive display spectrum.

  15. TrkA activation in the rat visual cortex by antirat trkA IgG prevents the effect of monocular deprivation.

    Science.gov (United States)

    Pizzorusso, T; Berardi, N; Rossi, F M; Viegi, A; Venstrom, K; Reichardt, L F; Maffei, L

    1999-01-01

    It has recently been shown that intraventricular injections of nerve growth factor (NGF) prevent the effects of monocular deprivation in the rat. We have tested the localization and the molecular nature of the NGF receptor(s) responsible for this effect by activating cortical trkA receptors in monocularly deprived rats by cortical infusion of a specific agonist of NGF on trkA, the bivalent antirat trkA IgG (RTA-IgG). TrkA protein was detected by immunoblot in the rat visual cortex during the critical period. Rats were monocularly deprived for 1 week (P21-28), and RTA-IgG or control rabbit IgG was delivered by osmotic minipumps. The effects of monocular deprivation on the ocular dominance of visual cortical neurons were assessed by extracellular single-cell recordings. We found that the shift towards the ipsilateral, non-deprived eye was largely prevented by RTA-IgG. Infusion of RTA-IgG combined with an antibody that blocks p75NTR (REX) slightly reduced RTA-IgG effectiveness in preventing monocular deprivation effects. These results suggest that NGF action in visual cortical plasticity is mediated by cortical TrkA receptors, with p75NTR exerting a facilitatory role.

  16. A Novel Approach to Surgical Instructions for Scrub Nurses by Using See-Through-Type Head-Mounted Display.

    Science.gov (United States)

    Yoshida, Soichiro; Sasaki, Asami; Sato, Chikage; Yamazaki, Mutsuko; Takayasu, Junya; Tanaka, Naofumi; Okabayashi, Norie; Hirano, Hiromi; Saito, Kazutaka; Fujii, Yasuhisa; Kihara, Kazunori

    2015-08-01

    In order to facilitate assistance during surgical procedures, it is important for scrub nurses to understand the operative procedure and to share the operation status with the attending surgeons. The potential utility of the head-mounted display as a new imaging monitor has been proposed in the medical field. This study prospectively evaluated the usefulness of a see-through-type head-mounted display as a novel intraoperative instructional tool for scrub nurses. From January to March 2014, scrub nurses who attended gasless laparoendoscopic single-port radical nephrectomy and radical prostatectomy wore the monocular see-through-type head-mounted display (AiRScouter; Brother Industries Ltd, Nagoya, Japan) displaying the instructions for the operative procedure through a crystal panel in front of the eye. Following the operation, the participants completed an anonymous questionnaire, which evaluated the image quality of the head-mounted display, the helpfulness of the head-mounted display in understanding the operative procedure, and adverse effects caused by the head-mounted display. Fifteen nurses were eligible for the analysis. The intraoperative use of the head-mounted display could help scrub nurses to understand the surgical procedure and to hand out the instruments for the operation, with no major head-mounted-display wear-related adverse events. This novel approach to supporting scrub nurses will help facilitate technical and nontechnical skills during surgery.

  17. Refreshing Refreshable Braille Displays.

    Science.gov (United States)

    Russomanno, Alexander; O'Modhrain, Sile; Gillespie, R Brent; Rodger, Matthew W M

    2015-01-01

    The increased access to books afforded to blind people via e-publishing has given them long-sought independence for both recreational and educational reading. In most cases, blind readers access materials using speech output. For some content such as highly technical texts, music, and graphics, speech is not an appropriate access modality as it does not promote deep understanding. Therefore blind braille readers often prefer electronic braille displays. But, these are prohibitively expensive. The search is on, therefore, for a low-cost refreshable display that would go beyond current technologies and deliver graphical content as well as text. And many solutions have been proposed, some of which reduce costs by restricting the number of characters that can be displayed, even down to a single braille cell. In this paper, we demonstrate that restricting tactile cues during braille reading leads to poorer performance in a letter recognition task. In particular, we show that lack of sliding contact between the fingertip and the braille reading surface results in more errors and that the number of errors increases as a function of presentation speed. These findings suggest that single cell displays which do not incorporate sliding contact are likely to be less effective for braille reading.

  18. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

    timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays, Russell D... musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is an...

  19. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for micro and small aerial vehicles, owing to the Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works. PMID:28481277

  20. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs.

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-05-07

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. This problem is complex, especially for micro and small aerial vehicles, owing to the Size, Weight and Power (SWaP) constraints. Therefore, using lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works are based on stereo cameras in order to obtain a 3D model of the obstacles, or to estimate their depth. Instead, in this paper, a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera is proposed. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles, then extracts the obstacles that have the probability of getting close toward the UAV. Secondly, by comparing the area ratio of the obstacle and the position of the UAV, the method decides if the detected obstacle may cause a collision. Finally, by estimating the obstacle 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated by performing real indoor and outdoor flights, and the obtained results show the accuracy of the proposed algorithm compared with other related works.
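
    The expansion cue used in the two records above can be illustrated with a few lines of OpenCV: track feature points across consecutive frames, build their convex hulls, and compare the hull areas. The synthetic points and the threshold below are placeholders, not the paper's values.

```python
import cv2
import numpy as np

# Sketch of the size-expansion cue: a convex hull around tracked feature
# points that grows faster than a threshold suggests an approaching obstacle.
prev_pts = np.random.rand(40, 2).astype(np.float32) * 200 + 200
curr_pts = (prev_pts - prev_pts.mean(axis=0)) * 1.15 + prev_pts.mean(axis=0)
# (in practice prev_pts/curr_pts come from e.g. ORB matching or LK tracking)

def hull_area(points):
    """Area of the convex hull around a set of 2-D points."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.contourArea(cv2.convexHull(pts))

expansion = hull_area(curr_pts) / max(hull_area(prev_pts), 1e-6)
EXPANSION_THRESHOLD = 1.10     # assumed tuning value

if expansion > EXPANSION_THRESHOLD:
    print("possible collision course, expansion ratio %.2f" % expansion)
else:
    print("no significant expansion (%.2f)" % expansion)
```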

  1. Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy.

    Science.gov (United States)

    de Croon, Guido C H E

    2016-01-07

    The visual cue of optical flow plays an important role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, while optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed for monocular distance estimation, relying on optical flow maneuvers and knowledge of the control inputs (efference copies). It is shown analytically that given a fixed control gain, the stability of a constant divergence control loop only depends on the distance to the approached surface. At close distances, the control loop starts to exhibit self-induced oscillations. The robot can detect these oscillations and hence be aware of the distance to the surface. The proposed stability-based strategy for estimating distances has two main attractive characteristics. First, self-induced oscillations can be detected robustly by the robot and are hardly influenced by wind. Second, the distance can be estimated during a zero divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and on board a Parrot AR drone 2.0. It is shown that the strategy can be used to: (1) trigger a final approach response during a constant divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to continuously stay on the 'edge of oscillation.'
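
    The stability-based cue can be illustrated with a toy simulation: a constant-divergence descent controlled with a fixed gain and a sensing delay becomes unstable below a critical height, and the onset of self-induced oscillation is then detectable from the recent divergence error. All constants (gain, delay, noise, thresholds) are invented for illustration and are not the values used in the paper.

```python
import numpy as np

# Toy constant-divergence landing with a fixed gain and a sensing/actuation
# delay. As the height drops, the loop becomes unstable and the divergence
# error starts to oscillate; detecting that oscillation gives a distance cue.
rng = np.random.default_rng(2)
dt, gain, D_set = 0.02, 6.0, 0.3        # step (s), control gain, divergence set-point (1/s)
delay_steps = 25                        # 0.5 s delay in the divergence estimate (assumed)
h, v = 10.0, -3.0                       # start near the D = -v/h = 0.3 equilibrium
err_hist = []

for step in range(4000):
    D = -v / max(h, 1e-3) + rng.normal(0.0, 0.01)   # noisy divergence estimate
    err_hist.append(D_set - D)
    delayed_err = err_hist[max(0, len(err_hist) - 1 - delay_steps)]
    accel = -gain * delayed_err         # descend faster when divergence is too low
    v += accel * dt                     # gravity assumed compensated
    h += v * dt
    window = err_hist[-50:]             # last second of divergence error
    if len(window) == 50 and np.std(window) > 0.1:
        print("self-induced oscillation detected at h = %.2f m" % h)
        break
    if h <= 0.1:
        print("touched down without detecting oscillation")
        break
```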

  2. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss.

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research.

  3. Virtual acoustic displays

    Science.gov (United States)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative mannikins and simulations of room acoustics. Such an interface also requires the careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTFs may not be possible in practice. For experienced listeners, localization performance was only slightly degraded compared to a subject's inherent ability. Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTFs as long as they provide adequate...
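
    The core synthesis step, convolving a monaural signal with a left- and right-ear head-related impulse response (HRIR) for the desired direction, can be sketched as follows; the HRIRs here are crude placeholders (a simple interaural time and level difference), whereas a real display would use measured HRTFs such as those described in the record.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal sketch of HRTF-based spatialisation over headphones. The HRIRs are
# stand-ins: a delayed, attenuated impulse for the far ear only. Measured
# HRIRs for the target direction would replace them in a real display.
fs = 44100
t = np.arange(0, 1.0, 1 / fs)
mono = 0.5 * np.sin(2 * np.pi * 440 * t)          # 1 s test tone

itd_samples = int(0.0006 * fs)                    # ~0.6 ms interaural delay
hrir_left = np.zeros(256)
hrir_left[0] = 1.0                                # source on the left (assumed)
hrir_right = np.zeros(256)
hrir_right[itd_samples] = 0.6                     # delayed, quieter at right ear

left = fftconvolve(mono, hrir_left)[:t.size]
right = fftconvolve(mono, hrir_right)[:t.size]
binaural = np.stack([left, right], axis=1)        # stereo stream for headphones
print(binaural.shape)
```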

  4. Duality in binocular rivalry: distinct sensitivity of percept sequence and percept duration to imbalance between monocular stimuli.

    Directory of Open Access Journals (Sweden)

    Chen Song

    BACKGROUND: Visual perception is usually stable and accurate. However, when the two eyes are simultaneously presented with conflicting stimuli, perception falls into a sequence of spontaneous alternations, switching between one stimulus and the other every few seconds. Known as binocular rivalry, this visual illusion decouples subjective experience from physical stimulation and provides a unique opportunity to study the neural correlates of consciousness. The temporal properties of this alternating perception have been intensively investigated for decades, yet the relationship between two fundamental properties - the sequence of percepts and the duration of each percept - remains largely unexplored. METHODOLOGY/PRINCIPAL FINDINGS: Here we examine the relationship between the percept sequence and the percept duration by quantifying their sensitivity to the strength imbalance between two monocular stimuli. We found that the percept sequence is far more susceptible to the stimulus imbalance than is the percept duration. The percept sequence always begins with the stronger stimulus, even when the stimulus imbalance is too weak to cause a significant bias in the percept duration. Therefore, introducing a small stimulus imbalance affects the percept sequence, whereas increasing the imbalance affects the percept duration, but not vice versa. To investigate why the percept sequence is so vulnerable to the stimulus imbalance, we further measured the interval between the stimulus onset and the first percept, during which subjects experienced the fusion of two monocular stimuli. We found that this interval is dramatically shortened with increased stimulus imbalance. CONCLUSIONS/SIGNIFICANCE: Our study shows that in binocular rivalry, the strength imbalance between monocular stimuli has a much greater impact on the percept sequence than on the percept duration, and increasing this imbalance can accelerate the process responsible for the percept sequence.

  5. Sensor Fusion of Monocular Cameras and Laser Rangefinders for Line-Based Simultaneous Localization and Mapping (SLAM Tasks in Autonomous Mobile Robots

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2012-01-01

    This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.

  6. The Energy Spectrum of Ultra-High-Energy Cosmic Rays Measured by the Telescope Array FADC Fluorescence Detectors in Monocular Mode

    CERN Document Server

    Abu-Zayyad, T; Allen, M; Anderson, R; Azuma, R; Barcikowski, E; Belz, J W; Bergman, D R; Blake, S A; Cady, R; Cheon, B G; Chiba, J; Chikawa, M; Cho, E J; Cho, W R; Fujii, H; Fujii, T; Fukuda, T; Fukushima, M; Hanlon, W; Hayashi, K; Hayashi, Y; Hayashida, N; Hibino, K; Hiyama, K; Honda, K; Iguchi, T; Ikeda, D; Ikuta, K; Inoue, N; Ishii, T; Ishimori, R; Ito, H; Ivanov, D; Iwamoto, S; Jui, C C H; Kadota, K; Kakimoto, F; Kalashev, O; Kanbe, T; Kasahara, K; Kawai, H; Kawakami, S; Kawana, S; Kido, E; Kim, H B; Kim, H K; Kim, J H; Kitamoto, K; Kitamura, S; Kitamura, Y; Kobayashi, K; Kobayashi, Y; Kondo, Y; Kuramoto, K; Kuzmin, V; Kwon, Y J; Lan, J; Lim, S I; Lundquist, J P; Machida, S; Martens, K; Matsuda, T; Matsuura, T; Matsuyama, T; Matthews, J N; Myers, I; Minamino, M; Miyata, K; Murano, Y; Nagataki, S; Nakamura, T; Nam, S W; Nonaka, T; Ogio, S; Ogura, J; Ohnishi, M; Ohoka, H; Oki, K; Oku, D; Okuda, T; Ono, M; Oshima, A; Ozawa, S; Park, I H; Pshirkov, M S; Rodriguez, D C; Roh, S Y; Rubtsov, G; Ryu, D; Sagawa, H; Sakurai, N; Sampson, A L; Scott, L M; Shah, P D; Shibata, F; Shibata, T; Shimodaira, H; Shin, B K; Shin, J I; Shirahama, T; Smith, J D; Sokolsky, P; Sonley, T J; Springer, R W; Stokes, B T; Stratton, S R; Stroman, T A; Suzuki, S; Takahashi, Y; Takeda, M; Taketa, A; Takita, M; Tameda, Y; Tanaka, H; Tanaka, K; Tanaka, M; Thomas, S B; Thomson, G B; Tinyakov, P; Tkachev, I; Tokuno, H; Tomida, T; Troitsky, S; Tsunesada, Y; Tsutsumi, K; Tsuyuguchi, Y; Uchihori, Y; Udo, S; Ukai, H; Vasiloff, G; Wada, Y; Wong, T; Yamakawa, Y; Yamane, R; Yamaoka, H; Yamazaki, K; Yang, J; Yoneda, Y; Yoshida, S; Yoshii, H; Zollinger, R; Zundel, Z

    2013-01-01

    We present a measurement of the energy spectrum of ultra-high-energy cosmic rays performed by the Telescope Array experiment using monocular observations from its two new FADC-based fluorescence detectors. After a short description of the experiment, we describe the data analysis and event reconstruction procedures. Since the aperture of the experiment must be calculated by Monte Carlo simulation, we describe this calculation and the comparisons of simulated and real data used to verify the validity of the aperture calculation. Finally, we present the energy spectrum calculated from the merged monocular data sets of the two FADC-based detectors, and also the combination of this merged spectrum with an independent, previously published monocular spectrum measurement performed by Telescope Array's third fluorescence detector (Abu-Zayyad et al., Astropart. Phys. 39 (2012), 109). This combined spectrum corroborates the recently published Telescope Array surface detector spectrum (Abu-Zayyad et al., ...

  7. Sensor fusion of monocular cameras and laser rangefinders for line-based Simultaneous Localization and Mapping (SLAM) tasks in autonomous mobile robots.

    Science.gov (United States)

    Zhang, Xinzheng; Rad, Ahmad B; Wong, Yiu-Kwong

    2012-01-01

    This paper presents a sensor fusion strategy applied for Simultaneous Localization and Mapping (SLAM) in dynamic environments. The designed approach consists of two features: (i) the first one is a fusion module which synthesizes line segments obtained from laser rangefinder and line features extracted from monocular camera. This policy eliminates any pseudo segments that appear from any momentary pause of dynamic objects in laser data. (ii) The second characteristic is a modified multi-sensor point estimation fusion SLAM (MPEF-SLAM) that incorporates two individual Extended Kalman Filter (EKF) based SLAM algorithms: monocular and laser SLAM. The error of the localization in fused SLAM is reduced compared with those of individual SLAM. Additionally, a new data association technique based on the homography transformation matrix is developed for monocular SLAM. This data association method relaxes the pleonastic computation. The experimental results validate the performance of the proposed sensor fusion and data association method.
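
    As a minimal sketch of the point-estimate fusion idea described above (not the authors' MPEF-SLAM code; the function name and numbers below are illustrative), two independent pose estimates with their covariances can be combined by inverse-covariance weighting:

        import numpy as np

        def fuse_estimates(x_cam, P_cam, x_laser, P_laser):
            """Fuse two independent pose estimates (mean, covariance) by
            inverse-covariance (information) weighting. This is a generic
            point-estimate fusion step, not the authors' exact MPEF-SLAM update."""
            I_cam = np.linalg.inv(P_cam)
            I_laser = np.linalg.inv(P_laser)
            P_fused = np.linalg.inv(I_cam + I_laser)
            x_fused = P_fused @ (I_cam @ x_cam + I_laser @ x_laser)
            return x_fused, P_fused

        # Example: 2D position + heading [x, y, theta] from each SLAM filter
        x_cam = np.array([1.02, 0.48, 0.10])
        P_cam = np.diag([0.04, 0.04, 0.01])
        x_laser = np.array([0.98, 0.52, 0.12])
        P_laser = np.diag([0.01, 0.01, 0.02])
        print(fuse_estimates(x_cam, P_cam, x_laser, P_laser))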

  8. Vergence and accommodation to multiple-image-plane stereoscopic displays: ``real world'' responses with practical image-plane separations?

    Science.gov (United States)

    MacKenzie, Kevin J.; Dickson, Ruth A.; Watt, Simon J.

    2012-01-01

    Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One solution is to distribute image intensity across a number of widely spaced image planes--a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters (D, the reciprocal of distance in meters), suggesting that a small number of image planes could eliminate vergence-accommodation conflicts over a large range of simulated distances. Evidence exists, however, of systematic differences between accommodation responses to binocular and monocular stimuli when the stimulus to accommodation is degraded, or at an incorrect distance. We examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to changes in depth specified by depth filtering, using image-plane separations of 0.6 to 1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6 to 0.9 D, but differed thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display.
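
    A minimal sketch of the depth-filtering idea, assuming the common linear-in-diopters intensity weighting (the paper's exact weighting rule is not reproduced here):

        def depth_filter_weights(target_d, near_plane_d, far_plane_d):
            """Split image intensity between two focal planes so the weighted
            blend simulates an intermediate dioptric distance. Assumes linear
            interpolation in diopters (D = 1 / distance in metres)."""
            assert far_plane_d <= target_d <= near_plane_d  # diopters decrease with distance
            span = near_plane_d - far_plane_d
            w_near = (target_d - far_plane_d) / span
            return w_near, 1.0 - w_near

        # Simulate a point at 1.5 D using image planes at 2.0 D and 1.0 D
        print(depth_filter_weights(1.5, 2.0, 1.0))   # (0.5, 0.5)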

  9. Image Descriptors for Displays

    Science.gov (United States)

    1975-03-01

    hypothetical television display. The viewing distance is 4 picture heights, and the bandwidth limitation has been set by the U.S. Monochrome Standards...significantly influence the power spectrum over most of the video frequency range. A large dc component and a small random component provide another scene... influences. It was illuminated with natural light to a brightness of over 300 ft-L. The high brightness levels were chosen so as to nearly reproduce the

  10. Refrigerated display cabinets; Butikskyla

    Energy Technology Data Exchange (ETDEWEB)

    Fahlen, Per

    2000-07-01

    This report summarizes experience from SP research and assignments regarding refrigerated transport and storage of food, mainly in the retail sector. It presents the fundamentals of heat and mass transfer in display cabinets with special focus on indirect systems and secondary refrigerants. Moreover, the report includes a brief account of basic food hygiene and the related regulations. The material has been compiled for educational purposes in the Masters program at Chalmers Technical University.

  11. TrkA activation in the rat visual cortex by antirat trkA IgG prevents the effect of monocular deprivation

    OpenAIRE

    Pizzorusso, Tommaso; Berardi, Nicoletta; Rossi, Francesco M.; Viegi, Alessandro; Venstrom, Kristine; Reichardt, Louis F.; Maffei, Lamberto

    1999-01-01

    It has been recently shown that intraventricular injections of nerve growth factor (NGF) prevent the effects of monocular deprivation in the rat. We have tested the localization and the molecular nature of the NGF receptor(s) responsible for this effect by activating cortical trkA receptors in monocularly deprived rats by cortical infusion of a specific agonist of NGF on trkA, the bivalent antirat trkA IgG (RTA-IgG). TrkA protein was detected by immunoblot in the rat visual cortex during the ...

  12. An effective algorithm for monocular video to stereoscopic video transformation based on three-level luminance correction

    Institute of Scientific and Technical Information of China (English)

    郑越; 杨淑莹

    2012-01-01

    This paper presents a new effective algorithm for monocular-to-stereoscopic video transformation. With this algorithm, monocular video can be transformed into stereoscopic format in near real time, and the output stream can be shown with a lifelike three-dimensional effect on any supported display device. The core idea is to extract images from the original monocular video and transform them into stereoscopic ones according to a Gaussian distribution, then build a three-level weighted-average brightness map from the generated image sequence, correct the image regions at each of the three levels, and finally compose the complete three-dimensional video. Replacing the traditional, time-consuming depth-image generation step with this procedure significantly improves transformation performance, so that images with a three-dimensional stereoscopic effect can be output in real time while the original monocular video is played back live.
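
    The following toy sketch illustrates only the general idea of using a quantized luminance map as a crude depth proxy and giving each of the three levels its own horizontal disparity; it is not the authors' algorithm, and the level boundaries and pixel shifts are arbitrary:

        import numpy as np

        def luminance_levels_to_views(gray, shifts=(1, 3, 5)):
            """Toy 2D-to-3D sketch: quantize the luminance map into three levels
            and give each level a different horizontal disparity when synthesizing
            the left/right views. Illustrative only; holes are left unfilled."""
            levels = np.digitize(gray, np.quantile(gray, [1/3, 2/3]))   # values 0, 1, 2
            h, w = gray.shape
            left = np.zeros_like(gray)
            right = np.zeros_like(gray)
            for lvl, d in enumerate(shifts):
                ys, xs = np.nonzero(levels == lvl)
                left[ys, np.clip(xs - d, 0, w - 1)] = gray[ys, xs]
                right[ys, np.clip(xs + d, 0, w - 1)] = gray[ys, xs]
            return left, right

        gray = np.random.rand(120, 160)          # stand-in for a video frame's luminance
        L, R = luminance_levels_to_views(gray)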

  13. The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Longitudinal Cohort Study of Apache AH Mk 1 Pilots -(Vision and Handedness)

    Science.gov (United States)

    2015-05-19

    HCVA values were available for 69 control subjects. For the right eye, the initial mean visual acuity was 0.10 logMAR (Snellen equivalent of 6/7.8 [20/26]); the final right eye mean visual acuity was 0.05 logMAR (Snellen equivalent of 6/6.9 [20/23]). For the left eye, the initial mean visual acuity was 0.11 logMAR (Snellen equivalent of 6/8.1 [20/27]); the final left eye mean visual acuity was 0.06 logMAR (Snellen equivalent of 6/7.2 [20/24]).

  14. Book Display as Adult Service

    Directory of Open Access Journals (Sweden)

    Matthew S. Moore

    1997-03-01

    Full Text Available Book display as an adult service is defined as choosing and positioning adult books from the collection to increase their circulation. The author contrasts bookstore arrangement for sales versus library arrangement for access. The paper considers the library-as-a-whole as a display, examines the right size for an in-library display, and discusses mass displays, end-caps, on-shelf displays, and the Tiffany approach. The author proposes that an effective display depends on an imaginative, unifying theme, and that book displays are part of the joy of libraries.

  15. Handbook of Visual Display Technology

    CERN Document Server

    Cranton, Wayne; Fihn, Mark

    2012-01-01

    The Handbook of Visual Display Technology is a unique work offering a comprehensive description of the science, technology, economic and human interface factors associated with the displays industry. An invaluable compilation of information, the Handbook will serve as a single reference source with expert contributions from over 150 international display professionals and academic researchers. All classes of display device are covered including LCDs, reflective displays, flexible solutions and emissive devices such as OLEDs and plasma displays, with discussion of established principles, emergent technologies, and particular areas of application. The wide-ranging content also encompasses the fundamental science of light and vision, image manipulation, core materials and processing techniques, display driving and metrology.

  16. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    Science.gov (United States)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

    The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. Micro air vehicles (MAVs) operate with low-altitude flight, limited payload, and low-accuracy onboard sensors. Taking these characteristics into account, a method is developed to determine the location of a ground moving target imaged from the air by a monocular camera mounted on a MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters that provide the MAV's and target's altitudes. Instead, it requires only the MAV flight state provided by its inherent onboard navigation system, which includes an inertial measurement unit (IMU) and a global positioning system (GPS) receiver. The key is to obtain accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region set around the target in the current image, the features lying on the same plane as the target are extracted and retained as aiding features. An inverse-velocity method then calculates the location of these points by integrating them with the aircraft state. The altitude of the target, calculated from the positions of these aiding features, is combined with the aircraft state and the image coordinates to geo-locate the target. Meanwhile, a Bayesian estimation framework is employed to suppress noise from the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for estimating the aircraft state and the aiding-feature locations that define the moving target's local environment. Second, an unscented transformation (UT) method determines the estimated mean and covariance of the target location from the aircraft state and the aiding-feature locations, and then exports them for the
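
    The geometric core of such a geo-location step can be sketched as a ray-plane intersection: back-project the target pixel through the camera pose supplied by the IMU/GPS and intersect the ray with a horizontal plane at the estimated target altitude. The matrices and numbers below are illustrative, not taken from the paper:

        import numpy as np

        def geolocate(pixel, K, R_cw, cam_pos, target_alt):
            """Intersect the camera ray through `pixel` with the horizontal plane
            z = target_alt. R_cw rotates camera-frame vectors into the world frame,
            cam_pos is the camera position in world coordinates (from GPS/IMU),
            K is the pinhole intrinsic matrix. All names here are illustrative."""
            u, v = pixel
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])    # ray in camera frame
            ray_world = R_cw @ ray_cam
            t = (target_alt - cam_pos[2]) / ray_world[2]          # scale to reach the plane
            return cam_pos + t * ray_world                        # world coordinates of target

        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        R_cw = np.diag([1.0, -1.0, -1.0])            # nadir-pointing camera (illustrative)
        cam_pos = np.array([0.0, 0.0, 100.0])        # MAV 100 m above the target plane
        print(geolocate((400, 240), K, R_cw, cam_pos, target_alt=0.0))   # ≈ [10, 0, 0]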

  17. Latest development of display technologies

    Science.gov (United States)

    Gao, Hong-Yue; Yao, Qiu-Xiang; Liu, Pan; Zheng, Zhi-Qiang; Liu, Ji-Cheng; Zheng, Hua-Dong; Zeng, Chao; Yu, Ying-Jie; Sun, Tao; Zeng, Zhen-Xiang

    2016-09-01

    In this review we focus on recent progress in the field of two-dimensional (2D) and three-dimensional (3D) display technologies. We present the current display materials and their applications, including organic light-emitting diodes (OLEDs), flexible OLEDs, quantum dot light-emitting diodes (QLEDs), active-matrix organic light-emitting diodes (AMOLEDs), electronic paper (E-paper), curved displays, stereoscopic 3D displays, volumetric 3D displays, light field 3D displays, and holographic 3D displays. Conventional 2D display devices, such as liquid crystal devices (LCDs), often result in ambiguity in high-dimensional data images because they lack true depth information. This review therefore provides a detailed description of 3D display technologies.

  18. Optical display for radar sensing

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Willey, Jefferson; Landa, Joseph; Hsieh, Minder; Larsen, Louis V.; Krzywicki, Alan T.; Tran, Binh Q.; Hoekstra, Philip; Dillard, John T.; Krapels, Keith A.; Wardlaw, Michael; Chu, Kai-Dee

    2015-05-01

    The Boltzmann headstone equation S = kB log W turns out to be the Rosetta Stone that translates the "hieroglyphics" of microwave sensing into an optical display. The left-hand side is the molecular entropy S, measuring the degree of uniformity of scattering off the sensing cross sections. The right-hand side is the inverse relationship (equation) predicting the Planck radiation spectral distribution, parameterized by the Kelvin temperature T. Use is made of the energy conservation law: the heat-capacity change of the reservoir (RV), T Δ S = -ΔE, equals the internal-energy change of the black-box (bb) subsystem. Moreover, irreversible thermodynamics (Δ S > 0 for collisional mixing toward ever larger uniformity, the "heat death" asserted by Boltzmann) leads to the so-called Maxwell-Boltzmann canonical probability. Given zero boundary conditions on the black box, Planck solved for discrete standing-wave eigenstates (equation). Together with the canonical partition function (equation), the ensemble average over all possible internal energies yields the celebrated Planck radiation spectrum (equation), where the density of states is (equation). In summary, given the multispectral sensing data (equation), we applied a Lagrange Constraint Neural Network (LCNN) to solve the Blind Sources Separation (BSS) problem for a set of equivalent bb target temperatures. From the measured values, slopes, and shapes we can fit a Kelvin temperature T for each bb target. As a result, we can apply analytical continuation for each entropy source along the temperature-unique Planck spectral curves toward an RGB color-temperature display for any sensing probe frequency.
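
    The "(equation)" placeholders above were lost in this extract; for reference, the textbook relations the abstract invokes are, in conventional notation,

        S = k_B \ln W,
        \langle E_\nu \rangle = \frac{h\nu}{e^{h\nu/(k_B T)} - 1},
        B_\nu(T) = \frac{2 h \nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_B T)} - 1},

    i.e. the Boltzmann entropy, the mean energy of a Planck oscillator obtained from the canonical partition function, and the resulting blackbody spectral radiance parameterized by the Kelvin temperature T. The paper's exact expressions, including its density-of-states factor, may differ in form.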

  19. The impact of congenital versus acquired monocular vision on self-reported quality of vision

    Directory of Open Access Journals (Sweden)

    Marcelo Caram Ribeiro Fernandes

    2010-12-01

    Full Text Available Objectives: When the vision of one eye is preserved (monocular vision) and there is high risk, poor prognosis and/or limited resources for surgery on the contralateral eye, it is not clear whether the benefit of binocularity outweighs that of reorientation to monocular vision. The objective is to quantify the impact on self-reported quality of vision of the binocular versus the monocular condition and, in the latter case, of congenital versus acquired monocular vision. Methods: Patients with visual acuity (VA) > 0.5 in each eye completed the structured 14-question questionnaire (VF-14), in which scores from 0 to 100 indicate the patient's level of satisfaction with his or her vision, from low to high respectively. Epidemiological data and the scores of the four groups were recorded and submitted to statistical analysis. Results: The VF-14 interview of 56 individuals revealed that the highest scores were similar between controls and subjects with congenital monocular vision, while intermediate and low scores were obtained by individuals with acquired monocular vision and by the bilaterally blind, respectively (p<0.001). The most difficult activities for individuals with acquired monocular vision were identifying small letters, recognizing people, distinguishing traffic signs, and watching TV. Conclusion: The study confirmed that vision loss has an unfavorable impact on self-reported performance of activities, and that this impact is greater in acquired than in congenital monocular vision. The data suggest that rehabilitation measures should be considered to improve quality of vision in untreatable, high-risk, or poor-prognosis diseases.

  20. Monocular camera-based mobile robot visual servo regulation control

    Institute of Scientific and Technical Information of China (English)

    刘阳; 王忠立; 蔡伯根; 闻映红

    2016-01-01

    To solve the monocular camera-based mobile robot regulation problem, a kinematic model of the robot in the camera coordinate frame was established under the conditions that the range (depth) information and the translation between the robot and camera frames are unknown and the camera optical axis has a fixed tilt angle. For this model, a robust adaptive control method based on the decomposition of the planar homography matrix was proposed, which guarantees global exponential convergence of the error. Simulation and experimental results show that the designed controller drives the mobile robot rapidly and smoothly to the desired pose, and that it is robust to the parameter uncertainties.
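
    A generic sketch of the homography-decomposition step that such a controller builds on is shown below, using OpenCV purely as a convenience; the point correspondences and intrinsic matrix are invented, and the adaptive control law itself is not reproduced:

        import numpy as np
        import cv2  # OpenCV used here only as a homography toolbox

        # Feature correspondences between the current and the desired camera view;
        # in the real system these come from tracked image features.
        pts_desired = np.float32([[100, 120], [400, 110], [420, 380], [90, 360]])
        pts_current = np.float32([[130, 140], [430, 120], [460, 400], [110, 390]])

        H, _ = cv2.findHomography(pts_current, pts_desired)

        # Intrinsic matrix of a hypothetical camera
        K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])

        # Decompose H into candidate rotations/translations/plane normals; the
        # controller would select the physically consistent solution and feed the
        # resulting pose error into its adaptive control law.
        n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
        print(n_solutions)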

  1. Binocularity in the little owl, Athene noctua. II. Properties of visually evoked potentials from the Wulst in response to monocular and binocular stimulation with sine wave gratings.

    Science.gov (United States)

    Porciatti, V; Fontanesi, G; Raffaelli, A; Bagnoli, P

    1990-01-01

    Visually evoked potentials (VEPs) have been recorded from the Wulst surface of the little owl, Athene noctua, in response to counterphase-reversal of sinusoidal gratings with different contrast, spatial frequency and mean luminance, presented either monocularly or binocularly. Monocular full-field stimuli presented to either eye evoked VEPs of similar amplitude, waveform and latency. Under binocular viewing, VEPs approximately doubled in amplitude without waveform changes. VEPs with similar characteristics could be obtained in response to stimulation of the contralateral, but not ipsilateral, hemifield. These results suggest that a 50% recrossing occurs in thalamic efferents and that different ipsilateral and contralateral regions converge onto the same Wulst sites. The VEP amplitude progressively decreased as the spatial frequency increased beyond 2 cycles/degree, and the high spatial frequency cut-off (VEP acuity) was higher under binocular viewing (8 cycles/degree) than under monocular viewing (5 cycles/degree) (200 cd/m2, 45% contrast). The VEP acuity increased with increasing contrast and decreased with reduced mean luminance. The binocular gain in both VEP amplitude and VEP acuity was largest at the lowest luminance levels. Binocular VEP summation occurred in the medium-high contrast range. With decreased contrast, both monocular and binocular VEPs progressively decreased in amplitude and tended to the same contrast threshold. The VEP contrast threshold depended on the spatial frequency (0.6-1.8% in the range 0.12-2 cycles/degree). Binocular VEPs often showed facilitatory interaction (binocular/monocular amplitude ratio greater than 2), but the binocular VEP amplitude did not change either by changing the stimulus orientation (horizontal vs. vertical gratings) or by inducing different retinal disparities.(ABSTRACT TRUNCATED AT 250 WORDS)

  2. Alert Display Distribution (ADD)

    Data.gov (United States)

    Social Security Administration — Repository that contains alerts that will be sent to SSA employees when certain conditions exist, to inform them of work that needs to be done, is being reviewed, or...

  3. Measuring Algorithm for the Distance to a Preceding Vehicle on Curve Road Using On-Board Monocular Camera

    Science.gov (United States)

    Yu, Guizhen; Zhou, Bin; Wang, Yunpeng; Wun, Xinkai; Wang, Pengcheng

    2015-12-01

    Due to the ever more severe challenges of traffic safety, Advanced Driver Assistance Systems (ADAS) have received widespread attention. Measuring the distance to a preceding vehicle is important for ADAS. However, existing algorithms focus more on straight road sections than on curves. In this paper, we present a novel algorithm for measuring the distance to a preceding vehicle on a curved road using an on-board monocular camera. First, the characteristics of driving on a curved road are analyzed and a method for recognizing the road area of the preceding vehicle is proposed. Then, the vehicle detection and distance measuring algorithms are investigated. We have verified these algorithms in real road driving. The experimental results show that the method proposed in this paper can detect the preceding vehicle on curved roads and accurately calculate the longitudinal and horizontal distances to the preceding vehicle.
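
    For orientation, the classic straight-road monocular range estimate (which the paper extends to curved roads) follows from the flat-road pinhole model; the parameter values below are illustrative:

        def flat_road_distance(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
            """Classic flat-road pinhole approximation: the image row of a vehicle's
            ground-contact point maps to longitudinal distance Z = f * h / (y - y0).
            This sketch covers only the straight-road case, not the paper's
            curve-road extension."""
            dy = y_bottom_px - y_horizon_px
            if dy <= 0:
                raise ValueError("contact point must lie below the horizon row")
            return focal_px * cam_height_m / dy

        print(flat_road_distance(y_bottom_px=420, y_horizon_px=240,
                                 focal_px=1200, cam_height_m=1.3))   # ~8.7 m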

  4. LHCb Event display

    CERN Document Server

    Trisovic, Ana

    2014-01-01

    The LHCb Event Display was made for educational purposes at the European Organization for Nuclear Research, CERN in Geneva, Switzerland. The project was implemented as a stand-alone application using C++ and ROOT, a framework developed by CERN for data analysis. This paper outlines the development and architecture of the application in detail, as well as the motivation for the development and the goals of the exercise. The application focuses on the visualization of events recorded by the LHCb detector, where an event represents a set of charged particle tracks in one proton-proton collision. Every particle track is coloured by its type and can be selected to see its essential information such as mass and momentum. The application allows students to save this information and calculate the invariant mass for any pair of particles. Furthermore, the students can use additional calculating tools in the application and build up a histogram of these invariant masses. The goal for the students is to find a $D^0$ par...
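
    The invariant-mass calculation the students perform reduces to the standard four-vector formula; a small stand-alone sketch (the four-vectors shown are invented, and this is not the application's own code) is:

        import math

        def invariant_mass(p1, p2):
            """Invariant mass of a particle pair from (E, px, py, pz) four-vectors,
            in consistent units (e.g. GeV): m^2 = (E1 + E2)^2 - |p1 + p2|^2."""
            E = p1[0] + p2[0]
            px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
            m2 = E * E - (px * px + py * py + pz * pz)
            return math.sqrt(max(m2, 0.0))

        # Two illustrative four-vectors (E, px, py, pz) in GeV
        p1 = (3.207, 1.0, 0.2, 3.0)
        p2 = (2.069, 0.5, -0.1, 2.0)
        print(invariant_mass(p1, p2))   # ≈ 0.76 GeV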

  5. Colorimetry for CRT displays.

    Science.gov (United States)

    Golz, Jürgen; MacLeod, Donald I A

    2003-05-01

    We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
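
    A minimal sketch of the computation chain such a colorimetric analysis rests on, going from gun drives through a phosphor matrix to cone excitations. The matrices below are standard stand-ins (sRGB/Rec.709 primaries and Hunt-Pointer-Estevez cones), not the calibration data or candidate cone sets examined in the paper:

        import numpy as np

        # Illustrative linear-RGB -> XYZ matrix; a real CRT calibration would use
        # the measured phosphor chromaticities and luminances instead.
        RGB_TO_XYZ = np.array([
            [0.4124, 0.3576, 0.1805],
            [0.2126, 0.7152, 0.0722],
            [0.0193, 0.1192, 0.9505],
        ])

        # Hunt-Pointer-Estevez XYZ -> LMS matrix, one of several possible
        # cone-sensitivity choices.
        XYZ_TO_LMS = np.array([
            [ 0.38971, 0.68898, -0.07868],
            [-0.22981, 1.18340,  0.04641],
            [ 0.0,     0.0,      1.0],
        ])

        def crt_to_cone_excitations(rgb_drive, gamma=2.2):
            """Map normalized CRT gun drives (0..1) to approximate cone excitations:
            gamma expansion -> phosphor matrix -> cone matrix."""
            linear = np.asarray(rgb_drive, float) ** gamma
            xyz = RGB_TO_XYZ @ linear
            return XYZ_TO_LMS @ xyz

        print(crt_to_cone_excitations([0.5, 0.5, 0.5]))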

  6. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  7. Data Display in Qualitative Research

    Directory of Open Access Journals (Sweden)

    Susana Verdinelli PsyD

    2013-02-01

    Full Text Available Visual displays help in the presentation of inferences and conclusions and represent ways of organizing, summarizing, simplifying, or transforming data. Data displays such as matrices and networks are often utilized to enhance data analysis and are more commonly seen in quantitative than in qualitative studies. This study reviewed the data displays used by three prestigious qualitative research journals within a period of three years. The findings include the types of displays used in these qualitative journals, the frequency of use, and the purposes for using visual displays as opposed to presenting data in text.

  8. Perceptual transparency in neon color spreading displays.

    Science.gov (United States)

    Ekroll, Vebjørn; Faul, Franz

    2002-08-01

    In neon color spreading displays, both a color illusion and perceptual transparency can be seen. In this study, we investigated the color conditions for the perception of transparency in such displays. It was found that the data are very well accounted for by a generalization of Metelli's (1970) episcotister model of balanced perceptual transparency to tristimulus values. This additive model correctly predicted which combinations of colors would lead to optimal impressions of transparency. Color combinations deviating slightly from the additive model also looked transparent, but less convincingly so.
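
    In conventional notation (the paper's symbols may differ), Metelli's episcotister model for balanced transparency and its generalization to tristimulus values read

        p = \alpha\, a + (1 - \alpha)\, t, \qquad 0 \le \alpha \le 1,
        (X_P, Y_P, Z_P) = \alpha\,(X_A, Y_A, Z_A) + (1 - \alpha)\,(X_T, Y_T, Z_T),

    where a and t are the contributions of the background and of the transparent layer and \alpha is the proportion of the open sector; applying the same additive mixture componentwise to the tristimulus values is the generalization tested in the study above.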

  9. Augmenting digital displays with computation

    Science.gov (United States)

    Liu, Jing

    As we inevitably step deeper and deeper into a world connected via the Internet, more and more information will be exchanged digitally. Displays are the interface between digital information and each individual. Naturally, one fundamental goal of displays is to reproduce information as realistically as possible since humans still care a lot about what happens in the real world. Human eyes are the receiving end of such information exchange; therefore it is impossible to study displays without studying the human visual system. In fact, the design of displays is rather closely coupled with what human eyes are capable of perceiving. For example, we are less interested in building displays that emit light in the invisible spectrum. This dissertation explores how we can augment displays with computation, which takes both display hardware and the human visual system into consideration. Four novel projects on display technologies are included in this dissertation: First, we propose a software-based approach to driving multiview autostereoscopic displays. Our display algorithm can dynamically assign views to hardware display zones based on multiple observers' current head positions, substantially reducing crosstalk and stereo inversion. Second, we present a dense projector array that creates a seamless 3D viewing experience for multiple viewers. We smoothly interpolate the set of viewer heights and distances on a per-vertex basis across the array's field of view, reducing image distortion, crosstalk, and artifacts from tracking errors. Third, we propose a method for high dynamic range display calibration that takes into account the variation of the chrominance error over luminance. We propose a data structure for enabling efficient representation and querying of the calibration function, which also allows user-guided balancing between memory consumption and the amount of computation. Fourth, we present user studies that demonstrate that the ~60 Hz critical flicker fusion

  10. Rapid display of radiographic images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.; Whitman, Robert A.; Blaine, G. James; Jost, R. Gilbert; Karlsson, L. M.; Monsees, Thomas L.; Hassen, Gregory L.; David, Timothy C.

    1991-07-01

    The requirements for the rapid display of radiographic images exceed the capabilities of widely available display, computer, and communications technologies. Computed radiography captures data with a resolution of about four megapixels. Large-format displays are available that can present over four megapixels. One megapixel displays are practical for use in combination with large-format displays and in areas where the viewing task does not require primary diagnosis. This paper describes an electronic radiology system that approximates the highest quality systems, but through the use of several interesting techniques allows the possibility of its widespread installation throughout hospitals. The techniques used can be grouped under three major system concepts: a local, high-speed image server, one or more physician's workstations each with one or more high-performance auxiliary displays specialized to the radiology viewing task, and dedicated, high-speed communication links between the server and the displays. This approach is enhanced by the use of a progressive transmission scheme to decrease the latency for viewing four megapixel images. The system includes an image server with storage for over 600 4-megapixel images and a high-speed link. A subsampled megapixel image is fetched from disk and transmitted to the display in about one second followed by the full resolution 4-megapixel image in about 2.5 seconds. Other system components include a megapixel display with a 6-megapixel display memory space and frame-rate update of image roam, zoom, and contrast. Plans for clinical use are presented.
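
    The progressive scheme described (a subsampled one-megapixel preview followed by the full four-megapixel image) can be sketched as a two-stage 2x2 subsampling split; this is only an illustration of the idea, not the system's actual transmission protocol:

        import numpy as np

        def progressive_chunks(image):
            """Two-stage progressive transmission in the spirit described above:
            first a 2x2-subsampled preview (1/4 of the pixels), then the remaining
            pixels needed to reconstruct the full-resolution image."""
            preview = image[::2, ::2].copy()
            rest = np.concatenate([image[::2, 1::2].ravel(),
                                   image[1::2, ::2].ravel(),
                                   image[1::2, 1::2].ravel()])
            return preview, rest

        img = np.arange(16, dtype=np.uint16).reshape(4, 4)
        preview, rest = progressive_chunks(img)
        print(preview.size, rest.size)   # 4 pixels up front, 12 to follow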

  11. Military display market segment: helicopters

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    2004-09-01

    The military display market is analyzed in terms of one of its segments: helicopter displays. Parameters requiring special consideration, to include luminance ranges, contrast ratio, viewing angles, and chromaticity coordinates, are examined. Performance requirements for rotary-wing displays relative to several premier applications are summarized. Display sizes having aggregate defense applications of 5,000 units or greater and having DoD applications across 10 or more platforms, are tabulated. The issue of size commonality is addressed where distribution of active area sizes across helicopter platforms, individually, in groups of two through nine, and ten or greater, is illustrated. Rotary-wing displays are also analyzed by technology, where total quantities of such displays are broken out into CRT, LCD, AMLCD, EM, LED, Incandescent, Plasma and TFEL percentages. Custom, versus Rugged commercial, versus commercial off-the-shelf designs are contrasted. High and low information content designs are identified. Displays for several high-profile military helicopter programs are discussed, to include both technical specifications and program history. The military display market study is summarized with breakouts for the helicopter market segment. Our defense-wide study as of March 2004 has documented 1,015,494 direct view and virtual image displays distributed across 1,181 display sizes and 503 weapon systems. Helicopter displays account for 67,472 displays (just 6.6% of DoD total) and comprise 83 sizes (7.0% of total DoD) in 76 platforms (15.1% of total DoD). Some 47.6% of these rotary-wing applications involve low information content displays comprising just a few characters in one color; however, as per fixed-wing aircraft, the predominant instantiation involves higher information content units capable of showing changeable graphics, color and video.

  12. Evaluating motion parallax and stereopsis as depth cues for autostereoscopic displays

    Science.gov (United States)

    Braun, Marius; Leiner, Ulrich; Ruschin, Detlef

    2011-03-01

    The perception of space in the real world is based on multifaceted depth cues, most of them monocular, some binocular. The development of 3D displays raises the question of which of these depth cues are predominant and should be simulated by computational means in such a panel. Beyond the cues based on image content, such as shadows or patterns, stereopsis and depth from motion parallax are the most significant mechanisms supplying observers with depth information. We set up a carefully designed test situation that largely excluded other, undesired distance cues. We then conducted a user test to find out which of these two depth cues is more relevant and whether a combination of both would increase accuracy in a depth-estimation task. The trials were conducted using our autostereoscopic "Free2C" displays, which are capable of detecting the user's eye position and steering the image lobes dynamically in that direction. At the same time, the eye position was used to update the virtual camera's location, thereby offering motion parallax to the observer. As far as we know, this was the first time that such a test had been conducted using an autostereoscopic display without any assistive technologies. Our results showed, in accordance with prior experiments, that both cues are effective; however, stereopsis is an order of magnitude more relevant. Combining both cues improved the precision of distance estimation by another 30-40%.
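
    A minimal sketch of the head-coupled virtual-camera update that produces motion parallax; a production renderer would use an off-axis frustum, and the names and numbers here are illustrative:

        import numpy as np

        def parallax_camera(head_pos_mm, screen_center_mm=np.zeros(3)):
            """Place the virtual camera at the tracked head position and aim it at
            the screen centre, so scene objects shift with head motion (motion
            parallax). Returns the view rotation and translation (look-at style)."""
            eye = np.asarray(head_pos_mm, float)
            forward = screen_center_mm - eye
            forward /= np.linalg.norm(forward)
            right = np.cross(forward, [0.0, 1.0, 0.0]); right /= np.linalg.norm(right)
            up = np.cross(right, forward)
            R = np.vstack([right, up, -forward])   # rows of the view rotation
            return R, -R @ eye                     # translation of the view matrix

        R, t = parallax_camera([60.0, 20.0, 600.0])   # head ~60 cm from the panel
        print(R, t)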

  13. Liquid crystal displays with high brightness of visualization versus active displays

    Science.gov (United States)

    Olifierczuk, Marek; Zieliński, Jerzy

    2007-05-01

    Nowadays Liquid Crystal Displays (LCDs) hold a very important place among visualization devices. They are used in many standard applications such as computer and video screens. In May 2006, a 100" LCD TV monitor was shown by LG. Beside this main direction of display development, the possibility of LCD applications in motorization and aviation is very interesting because of their insignificant electromagnetic disturbances. Examples are the glass cockpits of the U2 and Boeing 777 and many car dashboards. In this field there are now, beside LCDs, many other display technologies, three of which are of interest here: FEDs (Field Emission Displays), OLEDs (Organic Light Emitting Diodes), and PLEDs (Polymer Light Emitting Diodes). The leading position of LCDs results from their unique advantages of flat form, weight, power consumption, reliability, higher (than CRT) luminance, luminance uniformity, sunlight readability, wide dimming range, fault tolerance, and a large active display area with a small border. The starting point of our investigation was the comparison of passive LCDs with the other technologies that could, in principle, be used in the motorization and aviation fields. The following parameters are compared: contrast ratio, luminance level, temperature stability, lifetime, operating temperature range, color performance and depth, viewing cone, technology maturity, availability, and cost. In this work an analysis of liquid crystal displays used in specific applications is performed, and the possibilities of applying such displays under high ambient lighting levels are presented. The results of this analysis are obtained from a computer program written by the authors, which makes it possible to calculate the optical parameters of transmissive and reflective LCDs working in quasi-real conditions. The basic assumptions of this program are given. The program calculates the transmission and reflection coefficients of a display taking into account the
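
    A simple model of the quantity at stake when judging readability under high ambient light is the effective contrast ratio with a diffusely reflected ambient term; this illustrative formula is not the authors' simulation program:

        import math

        def effective_contrast(l_white, l_black, ambient_lux, reflectance):
            """Contrast ratio of a display under ambient illumination, assuming the
            reflected light is diffused (Lambertian): L_refl = E * R / pi, in cd/m^2
            for E in lux. Illustrative model only."""
            l_refl = ambient_lux * reflectance / math.pi
            return (l_white + l_refl) / (l_black + l_refl)

        # Emissive display (500 cd/m^2 white) in bright sunlight vs. indoor light
        print(effective_contrast(500, 0.5, ambient_lux=50000, reflectance=0.05))   # ~1.6
        print(effective_contrast(500, 0.5, ambient_lux=500, reflectance=0.05))     # ~60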

  14. X-1 on display

    Science.gov (United States)

    1949-01-01

    A Bell Aircraft Corporation X-1 series aircraft on display at an Open House at NACA Muroc Flight Test Unit or High-Speed Flight Research Station hangar on South Base of Edwards Air Force Base, California. (The precise date of the photo is uncertain, but it is probably before 1948.) The instrumentation that was carried aboard the aircraft to gather data is on display. The aircraft data was recorded on oscillograph film that was read, calibrated, and converted into meaningful parameters for the engineers to evaluate from each research flight. In the background of the photo are several early U.S. jets. These include several Lockheed P-80 Shooting Stars, which were used as chase planes on X-1 flights; two Bell P-59 Airacomets, the first U.S. jet pursuit aircraft (fighter in later parlance); and a prototype Republic XP-84 Thunderjet. There were five versions of the Bell X-1 rocket-powered research aircraft that flew at the NACA High-Speed Flight Research Station, Edwards, California. The bullet-shaped X-1 aircraft were built by Bell Aircraft Corporation, Buffalo, N.Y. for the U.S. Army Air Forces (after 1947, U.S. Air Force) and the National Advisory Committee for Aeronautics (NACA). The X-1 Program was originally designated the XS-1 for eXperimental Sonic. The X-1's mission was to investigate the transonic speed range (speeds from just below to just above the speed of sound) and, if possible, to break the 'sound barrier.' Three different X-1s were built and designated: X-1-1, X-1-2 (later modified to become the X-1E), and X-1-3. The basic X-1 aircraft were flown by a large number of different pilots from 1946 to 1951. The X-1 Program not only proved that humans could go beyond the speed of sound, it reinforced the understanding that technological barriers could be overcome. The X-1s pioneered many structural and aerodynamic advances including extremely thin, yet extremely strong wing sections; supersonic fuselage configurations; control system requirements; powerplant

  15. Laser illuminated flat panel display

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.T.

    1995-12-31

    A 10 inch laser illuminated flat panel Planar Optic Display (POD) screen has been constructed and tested. This POD screen technology is an entirely new concept in display technology. Although the initial display is flat and made of glass, this technology lends itself to applications where a plastic display might be wrapped around the viewer. The display screen is comprised of hundreds of planar optical waveguides where each glass waveguide represents a vertical line of resolution. A black cladding layer, having a lower index of refraction, is placed between each waveguide layer. Since the cladding makes the screen surface black, the contrast is high. The prototype display is 9 inches wide by 5 inches high and approximately 1 inch thick. A 3 milliwatt HeNe laser is used as the illumination source and a vector scanning technique is employed.

  16. Miniature information displays: primary applications

    Science.gov (United States)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-04-01

    Positioned to replace current liquid crystal display technology in many applications, miniature information displays have evolved to provide several truly portable platforms for the world's growing personal computing and communication needs. The technology and functionality of handheld computer and communicator systems has finally surpassed many of the standards that were originally established for desktop systems. In these new consumer electronics, performance, display size, packaging, power consumption, and cost have always been limiting factors for fabricating genuinely portable devices. The rapidly growing miniature information display manufacturing industry is making it possible to bring a wide range of highly anticipated new products to new markets.

  17. Maintenance Procedure Display: Head Mounted Display (HMD) Evaluations

    Science.gov (United States)

    Whitmore, Milrian; Litaker, Harry L., Jr.; Solem, Jody A.; Holden, Kritina L.; Hoffman, Ronald R.

    2007-01-01

    A viewgraph presentation describing maintenance procedures for head mounted displays is shown. The topics include: 1) Study Goals; 2) Near Eye Displays (HMDs); 3) Design; 4) Phase I-Evaluation Methods; 5) Phase 1 Results; 6) Improved HMD Mounting; 7) Phase 2 -Evaluation Methods; 8) Phase 2 Preliminary Results; and 9) Next Steps.

  18. Investigating pointing tasks across angularly coupled display areas

    DEFF Research Database (Denmark)

    Hennecke, Fabian; De Luca, Alexander; Nguyen, Ngo Dieu Huong;

    2013-01-01

    Pointing tasks are a crucial part of today’s graphical user interfaces. They are well understood for flat displays and most prominently are modeled through Fitts’ Law. For novel displays (e.g., curved displays with multi-purpose areas), however, it remains unclear whether such models for predicting user performance still hold – in particular when pointing is performed across differently oriented areas. To answer this question, we conducted an experiment on an angularly coupled display – the Curve – with two input conditions: direct touch and indirect mouse pointer. Our findings show that the target position affects overall pointing speed and offset in both conditions. However, we also found that Fitts’ Law can in fact still be used to predict performance as on flat displays. Our results help designers to optimize user interfaces on angularly coupled displays when pointing tasks are involved.
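
    For reference, the Shannon formulation of Fitts' Law used in such modeling is MT = a + b * log2(D/W + 1); the coefficients below are illustrative, whereas in a study like this one they are fitted per condition by linear regression:

        import math

        def fitts_mt(distance, width, a=0.10, b=0.15):
            """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
            The intercept a (s) and slope b (s/bit) are illustrative placeholders."""
            index_of_difficulty = math.log2(distance / width + 1)
            return a + b * index_of_difficulty

        print(fitts_mt(distance=300, width=20))   # ~0.70 s for an ID of 4 bits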

  19. Flexible daylight memory displays EASL DMD: a new approach toward displays for cockpit and soldier systems

    Science.gov (United States)

    Holter, Borre; Kamfjord, Thor G.; Fossum, Richard; Fagerberg, Ragnar

    2000-08-01

    The Norwegian-based company PolyDisplay® ASA, in collaboration with the Norwegian Army Material Command and SINTEF, has refined, developed and shown with color and black/white technology demonstrators an electrically addressed Smectic A reflective LCD technology featuring: (1) Good contrast, all-round viewing angle and readability under all light conditions (no wash-out in direct sunlight). (2) Infinite memory -- image remains without power -- very low power consumption, no or very low radiation ('silent display') and narrow band updating. (3) Clear, sharp and flicker-free images. (4) Large number of gray tones and colors possible. (5) Simple construction and production -- reduced cost, higher yield, more robust and environmentally friendly. (6) Possibility for lighter, more robust and flexible displays based on plastic substrates. The results and future implementation possibilities for cockpit and soldier-system displays are discussed.

  20. Updated defense display market assessment

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1999-08-01

    This paper addresses the number, function and size of principal military displays and establishes a basis to determine the opportunities for technology insertion in the immediate future and into the next millennium. Principal military displays are defined as those occupying appreciable crewstation real-estate and/or those without which the platform could not carry out its intended mission. DoD 'office' applications are excluded from this study. The military displays market is specified by such parameters as active area and footprint size, and other characteristics such as luminance, gray scale, resolution, angle, color, video capability, and night vision imaging system compatibility. Funded, future acquisitions, planned and predicted crewstation modification kits, and form-fit upgrades are taken into account. This paper provides an overview of the DoD niche market, allowing both government and industry a necessary reference by which to meet DoD requirements for military displays in a timely and cost-effective manner. The aggregate DoD installed base for direct-view and large-area military displays is presently estimated to be in excess of 313,000. Miniature displays are those which must be magnified to be viewed, involve a significantly different manufacturing paradigm and are used in helmet mounted displays and thermal weapon sight applications. Some 114,000 miniature displays are presently included within future weapon system acquisition plans. For vendor production planning purposes it is noted that foreign military sales could substantially increase these quantities. The vanishing vendor syndrome (VVS) for older display technologies continues to be a growing, pervasive problem throughout DoD, which consequently must leverage the more modern, especially flat panel, display technologies being developed to replace older, especially cathode ray tube, technology for civil-commercial markets. Total DoD display needs (FPD, HMD) are some 427,000.

  1. mRNAs coding for neurotransmitter receptors and voltage-gated sodium channels in the adult rabbit visual cortex after monocular deafferentiation

    Science.gov (United States)

    Nguyen, Quoc-Thang; Matute, Carlos; Miledi, Ricardo

    1998-01-01

    It has been postulated that, in the adult visual cortex, visual inputs modulate levels of mRNAs coding for neurotransmitter receptors in an activity-dependent manner. To investigate this possibility, we performed a monocular enucleation in adult rabbits and, 15 days later, collected their left and right visual cortices. Levels of mRNAs coding for voltage-activated sodium channels, and for receptors for kainate/α-amino-3-hydroxy-5-methylisoxazole-4-propionic acid (AMPA), N-methyl-d-aspartate (NMDA), γ-aminobutyric acid (GABA), and glycine were semiquantitatively estimated in the visual cortices ipsilateral and contralateral to the lesion by the Xenopus oocyte/voltage-clamp expression system. This technique also allowed us to study some of the pharmacological and physiological properties of the channels and receptors expressed in the oocytes. In cells injected with mRNA from left or right cortices of monocularly enucleated and control animals, the amplitudes of currents elicited by kainate or AMPA, which reflect the abundance of mRNAs coding for kainate and AMPA receptors, were similar. There was no difference in the sensitivity to kainate and in the voltage dependence of the kainate response. Responses mediated by NMDA, GABA, and glycine were unaffected by monocular enucleation. Sodium channel peak currents, activation, steady-state inactivation, and sensitivity to tetrodotoxin also remained unchanged after the enucleation. Our data show that mRNAs for major neurotransmitter receptors and ion channels in the adult rabbit visual cortex are not obviously modified by monocular deafferentiation. Thus, our results do not support the idea of a widespread dynamic modulation of mRNAs coding for receptors and ion channels by visual activity in the rabbit visual system. PMID:9501250

  2. Three-dimensional display technologies.

    Science.gov (United States)

    Geng, Jason

    2013-01-01

    The physical world around us is three-dimensional (3D), yet traditional display devices can show only two-dimensional (2D) flat images that lack depth (i.e., the third dimension) information. This fundamental restriction greatly limits our ability to perceive and to understand the complexity of real-world objects. Nearly 50% of the capability of the human brain is devoted to processing visual information [Human Anatomy & Physiology (Pearson, 2012)]. Flat images and 2D displays do not harness the brain's power effectively. With rapid advances in the electronics, optics, laser, and photonics fields, true 3D display technologies are making their way into the marketplace. 3D movies, 3D TV, 3D mobile devices, and 3D games have increasingly demanded true 3D display with no eyeglasses (autostereoscopic). Therefore, it would be very beneficial to readers of this journal to have a systematic review of state-of-the-art 3D display technologies.

  3. Metabolic Changes in the Bilateral Visual Cortex of the Monocular Blind Macaque: A Multi-Voxel Proton Magnetic Resonance Spectroscopy Study.

    Science.gov (United States)

    Wu, Lingjie; Tang, Zuohua; Feng, Xiaoyuan; Sun, Xinghuai; Qian, Wen; Wang, Jie; Jin, Lixin; Jiang, Jingxuan; Zhong, Yufeng

    2017-02-01

    The metabolic changes that accompany adaptive plasticity in the visual cortex after early monocular visual loss were unclear. In this study, we detected the metabolic changes in the bilateral visual cortex of normal (group A) and monocularly blind (group B) macaques using multi-voxel proton magnetic resonance spectroscopy ((1)H-MRS) at 32 months after right optic nerve transection, in order to study this adaptive plasticity. We then compared the N-acetyl aspartate (NAA)/creatine (Cr), myo-inositol (Ins)/Cr, choline (Cho)/Cr and Glx (glutamate + glutamine)/Cr ratios in the visual cortex between the two groups, as well as between the left and right visual cortex within groups A and B. Compared with group A, decreased NAA/Cr and Glx/Cr ratios were found in the bilateral visual cortex of group B, most clearly in the right visual cortex, whereas the Ins/Cr and Cho/Cr ratios of group B were increased. All of these findings were further confirmed by immunohistochemical staining. In conclusion, differences in metabolic ratios between groups A and B can be detected by multi-voxel (1)H-MRS in the visual cortex, which is valuable for investigating the adaptive plasticity of the monocularly blind macaque.

  4. A Re-Evaluation of Achromatic Spatiotemporal Vision: Nonoriented Filters are Monocular, they Adapt and Can be Used for Decision-Making at High Flicker Speeds

    Directory of Open Access Journals (Sweden)

    Tim S. Meese

    2011-05-01

    Full Text Available Masking, adaptation, and summation paradigms have been used to investigate the characteristics of early spatiotemporal vision. Each has been taken to provide evidence for (i) oriented and (ii) nonoriented spatial filtering mechanisms. However, subsequent findings suggest that the evidence for nonoriented mechanisms has been misinterpreted: possibly, those experiments revealed the characteristics of suppression (e.g., gain control), not excitation, or merely the isotropic subunits of the oriented detecting-mechanisms. To shed light on this, we used all three paradigms to focus on the “high-speed” corner of spatiotemporal vision (low spatial frequency, high temporal frequency), where cross-oriented achromatic effects are greatest. We used flickering Gabor patches as targets and a 2IFC procedure for monocular, binocular and dichoptic stimulus presentations. To account for our results we devised a simple model involving an isotropic monocular filter-stage feeding orientation-tuned binocular filters. Both filter stages are adaptable and their outputs are available to the decision-stage following nonlinear contrast transduction. However, the monocular isotropic filters adapt only to high-speed stimuli—consistent with a magnocellular sub-cortical substrate—and benefit decision making only for high-speed stimuli. According to this model, the visual processes revealed by masking, adaptation and summation are related but not identical.

  5. Comparison of the monocular Humphrey visual field and the binocular Humphrey esterman visual field test for driver licensing in glaucoma subjects in Sweden

    Directory of Open Access Journals (Sweden)

    Ayala Marcelo

    2012-08-01

    Full Text Available Abstract Background The purpose of this study was to compare the monocular Humphrey Visual Field (HVF) with the binocular Humphrey Esterman Visual Field (HEVF) for determining whether subjects suffering from glaucoma fulfilled the new medical requirements for possession of a Swedish driver’s license. Methods HVF SITA Fast 24–2 full threshold (monocularly) and HEVF (binocularly) were performed consecutively on the same day on 40 subjects with glaucomatous damage of varying degrees in both eyes. Assessment of results was constituted as either “pass” or “fail”, according to the new medical requirements put into effect September 1, 2010 by the Swedish Transport Agency. Results Forty subjects were recruited and participated in the study. Sixteen subjects passed both tests, and sixteen subjects failed both tests. Eight subjects passed the HEVF but failed the HVF. There was a significant difference between HEVF and HVF (χ2, p = 0.004). There were no subjects who passed the HVF, but failed the HEVF. Conclusions The monocular visual field test (HVF) gave more specific information about the location and depth of the defects, and therefore is the overwhelming method of choice for use in diagnostics. The binocular visual field test (HEVF) seems not to be as efficient as the HVF in finding visual field defects in glaucoma subjects, and is therefore doubtful in evaluating visual capabilities in traffic situations.

  6. Comparison of the monocular Humphrey Visual Field and the binocular Humphrey Esterman Visual Field test for driver licensing in glaucoma subjects in Sweden.

    Science.gov (United States)

    Ayala, Marcelo

    2012-08-02

    The purpose of this study was to compare the monocular Humphrey Visual Field (HVF) with the binocular Humphrey Esterman Visual Field (HEVF) for determining whether subjects suffering from glaucoma fulfilled the new medical requirements for possession of a Swedish driver's license. HVF SITA Fast 24-2 full threshold (monocularly) and HEVF (binocularly) were performed consecutively on the same day on 40 subjects with glaucomatous damage of varying degrees in both eyes. Assessment of results was constituted as either "pass" or "fail", according to the new medical requirements put into effect September 1, 2010 by the Swedish Transport Agency. Forty subjects were recruited and participated in the study. Sixteen subjects passed both tests, and sixteen subjects failed both tests. Eight subjects passed the HEVF but failed the HVF. There was a significant difference between HEVF and HVF (χ², p = 0.004). There were no subjects who passed the HVF, but failed the HEVF. The monocular visual field test (HVF) gave more specific information about the location and depth of the defects, and therefore is the overwhelming method of choice for use in diagnostics. The binocular visual field test (HEVF) seems not to be as efficient as the HVF in finding visual field defects in glaucoma subjects, and is therefore doubtful in evaluating visual capabilities in traffic situations.
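
    The reported comparison can be reproduced, up to the exact test variant the author used, from the discordant pairs alone (8 subjects passed HEVF but failed HVF, 0 the reverse). One standard analysis of such paired pass/fail data is McNemar's test:

        from math import erf, sqrt

        def mcnemar_chi2(b, c):
            """McNemar's test on the discordant pairs of a paired pass/fail table:
            b = passed HEVF but failed HVF, c = passed HVF but failed HEVF.
            chi2 = (b - c)^2 / (b + c), with 1 degree of freedom."""
            chi2 = (b - c) ** 2 / (b + c)
            z = sqrt(chi2)
            p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-sided p for 1-df chi-square
            return chi2, p

        print(mcnemar_chi2(8, 0))   # chi2 = 8.0, p ≈ 0.005, consistent with the reported p = 0.004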

  7. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the availability of advanced scanning and 3-D imaging technologies in current ophthalmology practice in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
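
    Once cup and disc boundaries are segmented, CDR and CAR reduce to simple ratios; a toy sketch on synthetic masks (not the paper's registration pipeline) is:

        import numpy as np

        def cdr_car(disc_mask, cup_mask):
            """Cup-to-disc diameter ratio (CDR, taken vertically here) and
            cup-to-disc area ratio (CAR) from binary segmentation masks.
            Masks are 2-D boolean arrays, True inside the disc/cup boundary."""
            disc_rows = np.where(disc_mask.any(axis=1))[0]
            cup_rows = np.where(cup_mask.any(axis=1))[0]
            cdr = (cup_rows.max() - cup_rows.min() + 1) / (disc_rows.max() - disc_rows.min() + 1)
            car = cup_mask.sum() / disc_mask.sum()
            return cdr, car

        # Toy example: concentric circles standing in for segmented boundaries
        yy, xx = np.mgrid[:200, :200]
        disc = (yy - 100) ** 2 + (xx - 100) ** 2 <= 80 ** 2
        cup = (yy - 100) ** 2 + (xx - 100) ** 2 <= 40 ** 2
        print(cdr_car(disc, cup))   # ~(0.5, 0.25)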

  8. Flat panel display - Impurity doping technology for flat panel displays

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Toshiharu [Advanced Technology Planning, Sumitomo Eaton Nova Corporation, SBS Tower 9F, 10-1, Yoga 4-chome, Setagaya-ku, 158-0097 Tokyo (Japan)]. E-mail: suzuki_tsh@senova.co.jp

    2005-08-01

    Features of the flat panel displays (FPDs) such as liquid crystal display (LCD) and organic light emitting diode (OLED) display, etc. using low temperature poly-Si (LTPS) thin film transistors (TFTs) are briefly reviewed comparing with other FPDs. The requirements for fabricating TFTs used for high performance FPDs and system on glass (SoG) are addressed. This paper focuses on the impurity doping technology, which is one of the key technologies together with crystallization by laser annealing, formation of high quality gate insulator and gate-insulator/poly-Si interface. The issues to be solved in impurity doping technology for state of the art and future TFTs are clarified.

  9. Helmet-mounted display technology on the VISTA NF-16D

    Science.gov (United States)

    Underhill, Gregory P.; Bailey, Randall E.; Markman, Steve

    1997-06-01

    Wright Laboratory's Variable-Stability In-Flight Simulator Test Aircraft (VISTA) NF-16D is the newest in-flight simulator in the USAF inventory. A unique research aircraft, it will perform a multitude of missions: developing and evaluating flight characteristics of new aircraft that have not yet flown; performing research in the areas of flying qualities, flight control design, pilot-vehicle interface, weapons and avionics integration; and training new test pilots. The VISTA upgrade will enhance the simulation fidelity and research capabilities by adding a programmable helmet-mounted display (HMD) and head-up display (HUD) in the front cockpit. The programmable HMD consists of a GEC-Marconi Avionics Viper II Helmet-Mounted Optics Module integrated with a modified Helmet Integrated Systems Limited HGU-86/P helmet, the Honeywell Advanced Metal Tolerant tracker, and a GEC-Marconi Avionics Programmable Display Generator. This system will provide a real-time programmable HUD and a monocular stroke-capable HMD in the front cockpit. The HMD system is designed for growth to stroke-on-video, binocular capability. This paper examines some of the issues associated with current HMD development, and explains the value of rapid prototyping or 'quick-look' flight testing on the VISTA NF-16D. A brief overview of the VISTA NF-16D and the hardware and software modifications made to incorporate the programmable display system is given, as well as a review of several key decisions that were made in the programmable display system implementation. The system's capabilities and what they mean to potential users and designers are presented, particularly for pilot-vehicle interface research.

  10. The technology of multiuser large display area and auto free-viewing stereoscopic display

    Science.gov (United States)

    Zhao, Tian-Qi; Zhang, He-Ling; Han, Jing

    2010-11-01

    Glasses-free optical-grating stereoscopic display is one of the chief directions of stereoscopic display development, but it is always constrained by the stereoscopic viewing range, the quantity of stereoscopic information, and the number of users. This research uses a combination of a Fresnel lens array and controllable point light sources to deliver image information to the two eyes of different users separately. Combined with eye-tracking technology, this allows a glasses-free optical-grating stereoscopic display to be viewed over a range of 3D orientations by multiple users with two-view image sources, and to be viewed in a 360° stereoscopic overlook by a single user with multi-view image sources.

  11. Tone compatibility between HDR displays

    Science.gov (United States)

    Bist, Cambodge; Cozot, Rémi; Madec, Gérard; Ducloux, Xavier

    2016-09-01

    High Dynamic Range (HDR) is the latest trend in television technology and we expect an influx of HDR capable consumer TVs in the market. Initial HDR consumer displays will operate at a peak brightness of about 500-1000 nits while in the coming years display peak brightness is expected to go beyond 1000 nits. However, professionally graded HDR content can range from 1000 to 4000 nits. As with Standard Dynamic Range (SDR) content, we can expect HDR content to be available in a variety of lighting styles such as low key, medium key and high key video. This raises concerns over tone-compatibility between HDR displays especially when adapting to various lighting styles. It is expected that dynamic range adaptation between HDR displays uses similar techniques as found with tone mapping and tone expansion operators. In this paper, we survey simple tone mapping methods of 4000 nits color-graded HDR content for 1000 nits HDR displays. We also investigate tone expansion strategies when HDR content graded in 1000 nits is displayed on 4000 nits HDR monitors. We conclude that the best tone reproduction technique between HDR displays strongly depends on the lighting style of the content.
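    As a concrete illustration of the kind of tone expansion strategy surveyed here, the sketch below applies a simple gamma-style expansion from a 1000-nit grade to a 4000-nit display. The exponent and the overall form are illustrative assumptions, not the operators evaluated in the paper.

```python
import numpy as np

def expand_tones(luminance_nits, src_peak=1000.0, dst_peak=4000.0, gamma=1.2):
    """Simple gamma-style tone expansion: normalize to the source peak,
    apply a power curve, then rescale to the destination peak."""
    norm = np.clip(np.asarray(luminance_nits, dtype=float) / src_peak, 0.0, 1.0)
    return dst_peak * norm ** gamma

# A 100-nit mid-tone in the 1000-nit grade maps to roughly 252 nits on a 4000-nit display
print(expand_tones([100.0, 500.0, 1000.0]))
```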

  12. Safety of the Las Vegas left-turn display.

    Science.gov (United States)

    Ozmen, Ozlem; Tian, Zong Z; Gibby, A Reed

    2014-01-01

    This paper provides a safety evaluation of a special protected/permitted left turn signal control (Las Vegas LT Display) that has been implemented in the urbanized area of Las Vegas, Nevada. The Las Vegas LT Display eliminates the yellow trap condition for the leading approach in lead/lag operation. It provides protected-only left turns during certain times of day by suppressing the permitted green ball and yellow ball displays. Before and after studies were conducted using crash data from 10 intersections. Results from the analyses indicated no obvious safety concerns due to use of the special display. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Peculiarities of vernier monocular and binocular visual acuity in the retinal orthogonal meridians in patients with hypermetropic astigmatism

    Directory of Open Access Journals (Sweden)

    Владимир Александрович Коломиец

    2015-06-01

    Full Text Available An examination of meridional vernier visual acuity was carried out in 100 patients aged 7-25 years with simple and compound hypermetropic astigmatism and refractive amblyopia. The astigmatic component of refraction was in the range 0.5-2.5 dptr. Corrected visual acuity was 0.9-1.0 in the sighting eyes and 0.4-0.85 relative units in the amblyopic eyes. Methods. Visual acuity was determined with the Landolt rings of the Sivtsev table. Vernier visual acuity was determined in seconds of arc from 5 km, using a special computer program. Results. It was demonstrated that in patients with simple hypermetropic astigmatism the diagnosis of meridional amblyopia can be refined by comparing monocular and binocular vernier visual acuity in the orthogonal meridians of the retinas. Conclusions. An effect of increased meridional binocular visual acuity in one meridian and its absence in the other allows selective meridional disturbances of visual acuity to be identified.

  14. Rapid recovery from the effects of early monocular deprivation is enabled by temporary inactivation of the retinas

    Science.gov (United States)

    Fong, Ming-fai; Mitchell, Donald E.; Duffy, Kevin R.; Bear, Mark F.

    2016-01-01

    A half-century of research on the consequences of monocular deprivation (MD) in animals has revealed a great deal about the pathophysiology of amblyopia. MD initiates synaptic changes in the visual cortex that reduce acuity and binocular vision by causing neurons to lose responsiveness to the deprived eye. However, much less is known about how deprivation-induced synaptic modifications can be reversed to restore normal visual function. One theoretically motivated hypothesis is that a period of inactivity can reduce the threshold for synaptic potentiation such that subsequent visual experience promotes synaptic strengthening and increased responsiveness in the visual cortex. Here we have reduced this idea to practice in two species. In young mice, we show that the otherwise stable loss of cortical responsiveness caused by MD is reversed when binocular visual experience follows temporary anesthetic inactivation of the retinas. In 3-mo-old kittens, we show that a severe impairment of visual acuity is also fully reversed by binocular experience following treatment and, further, that prolonged retinal inactivation alone can erase anatomical consequences of MD. We conclude that temporary retinal inactivation represents a highly efficacious means to promote recovery of function. PMID:27856748

  15. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage framework to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which has challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.
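    The sketch below illustrates the annealed sampling idea mentioned in this abstract on a toy bimodal two-dimensional target, standing in for the paper's posterior over joint positions. It uses per-coordinate Metropolis-within-Gibbs updates with a geometric temperature schedule; the target density, schedule and proposal scale are hypothetical choices, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Toy bimodal 2-D log-density standing in for a posterior over joint positions."""
    modes = np.array([[-2.0, -2.0], [2.0, 2.0]])
    d2 = ((x - modes) ** 2).sum(axis=1)
    return np.log(np.exp(-0.5 * d2).sum())

def annealed_gibbs(n_sweeps=2000, t_start=5.0, t_end=1.0):
    """Per-coordinate (Metropolis-within-Gibbs) updates with a geometric temperature
    schedule: high temperature explores both modes, T -> 1 refines around one of them."""
    x = rng.normal(size=2)
    samples = []
    for temp in np.geomspace(t_start, t_end, n_sweeps):
        for d in range(2):                       # update one coordinate at a time
            proposal = x.copy()
            proposal[d] += rng.normal(scale=0.8)
            log_accept = (log_target(proposal) - log_target(x)) / temp
            if np.log(rng.uniform()) < log_accept:
                x = proposal
        samples.append(x.copy())
    return np.array(samples)

samples = annealed_gibbs()
print(samples[-500:].mean(axis=0))   # late samples concentrate near one of the modes
```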

  16. Ten inch Planar Optic Display

    Energy Technology Data Exchange (ETDEWEB)

    Beiser, L. [Beiser (Leo) Inc., Flushing, NY (United States); Veligdan, J. [Brookhaven National Lab., Upton, NY (United States)

    1996-04-01

    A Planar Optic Display (POD) is being built and tested for suitability as a high brightness replacement for the cathode ray tube (CRT). The POD display technology utilizes a laminated optical waveguide structure which allows a projection type of display to be constructed in a thin (1 to 2 inch) housing. Inherent in the optical waveguide is a black cladding matrix which gives the display a black appearance, leading to very high contrast. A Digital Micromirror Device (DMD) from Texas Instruments is used to create video images in conjunction with a 100 milliwatt green solid state laser. An anamorphic optical system is used to inject light into the POD to form a stigmatic image. In addition to the design of the POD screen, we discuss: image formation, image projection, and optical design constraints.

  17. ENERGY STAR Certified Displays - Deprecated

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset is up-to-date but newer better data can be retrieved at: https://data.energystar.gov/dataset/ENERGY-STAR-Certified-Displays/xsyb-v8gs Certified models...

  18. Color speckle in laser displays

    Science.gov (United States)

    Kuroda, Kazuo

    2015-07-01

    At the beginning of this century, lighting technology shifted from discharge lamps, fluorescent lamps and electric bulbs to solid-state lighting. Current solid-state lighting is based on light emitting diode (LED) technology, but laser lighting technology is developing rapidly, in such forms as laser cinema projectors, laser TVs, laser head-up displays, laser head mounted displays, and laser headlamps for motor vehicles. One of the main issues of laser displays is the reduction of speckle noise [1]. For monochromatic laser light, speckle is a random interference pattern on the image plane (the retina for a human observer). For laser displays, RGB (red-green-blue) lasers form speckle patterns independently, which results in a random distribution of chromaticity, called color speckle [2].

  19. Ultraminiature, Micropower Multipurpose Display Project

    Data.gov (United States)

    National Aeronautics and Space Administration — High information content electronic displays remain the most difficult element of the human-machine interface to effectively miniaturize. Mobile applications need a...

  20. Effective color design for displays

    Science.gov (United States)

    MacDonald, Lindsay W.

    2002-06-01

    Visual communication is a key aspect of human-computer interaction, which contributes to the satisfaction of user and application needs. For effective design of presentations on computer displays, color should be used in conjunction with the other visual variables. The general needs of graphic user interfaces are discussed, followed by five specific tasks with differing criteria for display color specification - advertising, text, information, visualization and imaging.

  1. Compact three-dimensional head-mounted display system with Savart plate.

    Science.gov (United States)

    Lee, Chang-Kun; Moon, Seokil; Lee, Seungjae; Yoo, Dongheon; Hong, Jong-Young; Lee, Byoungho

    2016-08-22

    We propose a three-dimensional (3D) head-mounted display (HMD) providing multi-focal and wearable functions by using polarization-dependent optical path switching in a Savart plate. The multi-focal function is implemented by optically duplicating a micro display with a high pixel density of 1666 pixels per inch in the longitudinal direction according to the polarization state. The combination of the micro display, a fast-switching polarization rotator and the Savart plate retains a small form factor suitable for a wearable device. The optical aberrations of the duplicated panels are investigated by ray tracing according to both wavelength and polarization state. Astigmatism and lateral chromatic aberration of the extraordinary wave are compensated by modification of the Savart plate and a sub-pixel shifting method, respectively. To verify the feasibility of the proposed system, a prototype of the HMD module for one eye is implemented. The module has a compact size of 40 mm by 90 mm by 40 mm and a weight of 131 g, supporting the wearable function. The micro display and polarization rotator are synchronized in real time at 30 Hz, and two focal planes are formed at 640 and 900 mm from the eye box, respectively. In experiments, the prototype also provides an augmented reality function by combining the optically duplicated panels with a beam splitter. The multi-focal function of the optically duplicated panels is verified without astigmatism or color dispersion after compensation. When light field optimization for two additive layers is performed, perspective images are observed, and the integration of the real-world scene and high-quality 3D images is confirmed.

  2. Evaluative Conditioning: the Role of Display Duration, Valence Intensity and Contingency Awareness

    Institute of Scientific and Technical Information of China (English)

    赵显; 李晔; 刘力; 曾红玲; 郑健

    2012-01-01

    Using real trademark images as conditioned stimuli and emotional pictures as unconditioned stimuli, this study explored the effects of unconditioned-stimulus display duration, valence intensity and contingency awareness on the evaluative conditioning effect. The role of contingency awareness was examined in detail by combining a four-picture recognition test with item-based analyses. The results showed that evaluative conditioning occurred only at the long US display duration and the strong US valence level, and that the effect depended on participants' awareness of the CS-US contingencies. The mediating role of contingency awareness between display duration (or valence intensity) and the evaluative conditioning effect was not significant. The results do not support the implicit misattribution account of evaluative conditioning or the related claims of the associative-propositional evaluation model, and partially support the propositional account.%Evaluative conditioning (EC) refers to the change of an attitude toward an affectively neutral object (conditioned stimulus, or CS), following the object's pairing with another positively or negatively valenced stimulus (unconditioned stimulus, or US). EC is theoretically regarded as an associative learning process in the associative-propositional evaluation (APE) model, but many controversies have arisen in empirical studies of EC. Some researchers found EC could not occur without awareness of CS-US contingencies (supporting the propositional account), but others did not. It is also still unclear whether EC relies on much attention or not. Based on the propositional account, the purpose of the present study was to investigate the effect of display duration of the US, and to retest the effect of valence intensity, CS-US contingency awareness, and their integrated mechanism on EC, measured by explicit evaluative rating combined with a four-picture recognition test and item-based analyses. The hypotheses were tested in a sample of 122 college students (38 males). In a 2 (display duration) x 2 (valence intensity) x 2 (CS type) mixed design, with CS type as a within-group factor (CS+ means that CS was paired with positive pictures and CS- means that CS was paired with negative pictures), the

  3. Factorial Design: Binocular and Monocular Depth Perception in Vertical and Horizontal Stimuli.

    Science.gov (United States)

    Zerbolio, Dominic J., Jr.; Walker, James T.

    1989-01-01

    Describes a factorial experiment that is used as a laboratory exercise in a research methods course. Uses a Howard-Dolman depth perception apparatus, combining the factors of viewing condition and rod orientation to illustrate the nature of an interaction and the necessity of an additional analysis of simple main effects. (Author/LS)

  4. A comparison of low-cost monocular vision techniques for pothole distance estimation

    CSIR Research Space (South Africa)

    Nienaber, S

    2015-12-01

    Full Text Available to these obstacles in the range of 5 m to 30 m. We provide an empirical evaluation of the accuracy of these approaches under various conditions, and make recommendations for when each approach is most suitable. The approaches are based on the pinhole camera model...

  5. Likelihood alarm displays. [for human operator

    Science.gov (United States)

    Sorkin, Robert D.; Kantowitz, Barry H.; Kantowitz, Susan C.

    1988-01-01

    In a likelihood alarm display (LAD) information about event likelihood is computed by an automated monitoring system and encoded into an alerting signal for the human operator. Operator performance within a dual-task paradigm was evaluated with two LADs: a color-coded visual alarm and a linguistically coded synthetic speech alarm. The operator's primary task was one of tracking; the secondary task was to monitor a four-element numerical display and determine whether the data arose from a 'signal' or 'no-signal' condition. A simulated 'intelligent' monitoring system alerted the operator to the likelihood of a signal. The results indicated that (1) automated monitoring systems can improve performance on primary and secondary tasks; (2) LADs can improve the allocation of attention among tasks and provide information integrated into operator decisions; and (3) LADs do not necessarily add to the operator's attentional load.
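    A minimal sketch of the likelihood-encoding idea behind an LAD: the automated monitor computes a signal likelihood and maps it onto a graded alert instead of a binary alarm. The thresholds, categories and wording below are hypothetical, not those used in the study.

```python
def likelihood_alarm(p_signal: float) -> str:
    """Map a computed signal likelihood onto a graded, colour-coded alert level."""
    if p_signal < 0.25:
        return "GREEN: signal unlikely"
    if p_signal < 0.60:
        return "YELLOW: signal possible - check when workload permits"
    if p_signal < 0.90:
        return "ORANGE: signal likely - check soon"
    return "RED: signal very likely - check now"

for p in (0.10, 0.50, 0.70, 0.95):
    print(f"p = {p:.2f} -> {likelihood_alarm(p)}")
```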

  6. Two methods to forecast auroral displays

    Science.gov (United States)

    Sigernes, Fred; Dyrland, Margit; Brekke, Pål; Chernouss, Sergey; Lorentzen, Dag Arne; Oksavik, Kjellmar; Sterling Deehr, Charles

    2011-10-01

    This work compares the methods by Starkov (1994a) and Zhang & Paxton (2008), that calculate the size and location of the auroral ovals as a function of planetary Kp index. The ovals are mapped in position and time onto a solar illuminated surface model of the Earth. It displays both the night- and dayside together with the location of the twilight zone as Earth rotates under the ovals. The graphical display serves as a tool to forecast auroral activity based on the predicted value of the Kp index. The forecast is installed as a service at http://kho.unis.no/. The Zhang & Paxton (2008) ovals are wider in latitude than the Starkov (1994a) ovals. The nightside model ovals coincide fairly well in shape for low to normal auroral conditions. The equatorward border of the diffuse aurora is well defined by both methods on the nightside for Kp ≤ 7. The dayside needs further studies in order to conclude.

  7. Two methods to forecast auroral displays

    Directory of Open Access Journals (Sweden)

    Oksavik Kjellmar

    2011-10-01

    Full Text Available This work compares the methods by Starkov (1994a) and Zhang & Paxton (2008), that calculate the size and location of the auroral ovals as a function of planetary Kp index. The ovals are mapped in position and time onto a solar illuminated surface model of the Earth. It displays both the night- and dayside together with the location of the twilight zone as Earth rotates under the ovals. The graphical display serves as a tool to forecast auroral activity based on the predicted value of the Kp index. The forecast is installed as a service at http://kho.unis.no/. The Zhang & Paxton (2008) ovals are wider in latitude than the Starkov (1994a) ovals. The nightside model ovals coincide fairly well in shape for low to normal auroral conditions. The equatorward border of the diffuse aurora is well defined by both methods on the nightside for Kp ≤ 7. The dayside needs further studies in order to conclude.

  8. The VLT Real Time Display

    Science.gov (United States)

    Herlin, T.; Brighton, A.; Biereichel, P.

    The VLT Real-Time Display (RTD) software was developed in order to support image display in real-time, providing a tool for users to display video like images from a camera or detector as fast as possible on an X-Server. The RTD software is implemented as a package providing a Tcl/Tk image widget written in C++ and an independent image handling library and can be used as a building block, adding display capabilities to dedicated VLT control applications. The RTD widget provides basic image display functionality like: panning, zooming, color scaling, colormaps, intensity changes, pixel query, overlaying of line graphics. A large set of assisting widgets, e.g., colorbar, zoom window, spectrum plot are provided to enable the building of image applications. The support for real-time is provided by an RTD image event mechanism used for camera or detector subsystems to pass images to the RTD widget. Image data are passed efficiently via shared memory. This paper describes the architecture of the RTD software and summarizes the features provided by RTD.

  9. Phosphors for flat panel emissive displays

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, M.T.; Walko, R.J.; Phillips, M.L.F.

    1995-07-01

    An overview of emissive display technologies is presented. Display types briefly described include: cathode ray tubes (CRTs), field emission displays (FEDs), electroluminescent displays (ELDs), and plasma display panels (PDPs). The critical role of phosphors in further development of the latter three flat panel emissive display technologies is outlined. The need for stable, efficient red, green, and blue phosphors for RGB full color displays is emphasized.

  10. BES Monitoring & Displaying System

    Institute of Scientific and Technical Information of China (English)

    MengWANG; BingyunZHANG; et al.

    2001-01-01

    BES Monitoring & Displaying System (BESMDS) is projected to monitor and display the running status of the DAQ and Slow Control systems of BES through the Web for worldwide access. It provides a real-time remote means of monitoring as well as an approach to study the environmental influence upon physical data taking. The system collects real-time data separately from BES online subsystems by network sockets and stores the data into a database. People can access the system through its web site, which retrieves data on request from the database and can display results in dynamically created images. Its web address is http://besmds.ihep.ac.cn/

  11. Engineering antibodies by yeast display.

    Science.gov (United States)

    Boder, Eric T; Raeeszadeh-Sarmazdeh, Maryam; Price, J Vincent

    2012-10-15

    Since its first application to antibody engineering 15 years ago, yeast display technology has been developed into a highly potent tool for both affinity maturing lead molecules and isolating novel antibodies and antibody-like species. Robust approaches to the creation of diversity, construction of yeast libraries, and library screening or selection have been elaborated, improving the quality of engineered molecules and certainty of success in an antibody engineering campaign and positioning yeast display as one of the premier antibody engineering technologies currently in use. Here, we summarize the history of antibody engineering by yeast surface display, approaches used in its application, and a number of examples highlighting the utility of this method for antibody engineering.

  12. PROGRAMMABLE DISPLAY PUSHBUTTON LEGEND EDITOR

    Science.gov (United States)

    Busquets, A. M.

    1994-01-01

    The Programmable Display Pushbutton (PDP) is a pushbutton device available from Micro Switch which has a programmable 16 x 35 matrix of LEDs on the pushbutton surface. Any desired legends can be displayed on the PDPs, producing user-friendly applications which greatly reduce the need for dedicated manual controls. Because the PDP can interact with the operator, it can call for the correct response before transmitting its next message. It is both a simple manual control and a sophisticated programmable link between the operator and the host system. The Programmable Display Pushbutton Legend Editor, PDPE, is used to create the LED displays for the pushbuttons. PDPE encodes PDP control commands and legend data into message byte strings sent to a Logic Refresh and Control Unit (LRCU). The LRCU serves as the driver for a set of four PDPs. The legend editor (PDPE) transmits to the LRCU user specified commands that control what is displayed on the LED face of the individual pushbuttons. Upon receiving a command, the LRCU transmits an acknowledgement that the message was received and executed successfully. The user then observes the effect of the command on the PDP displays and decides whether or not to send the byte code of the message to a data file so that it may be called by an applications program. The PDPE program is written in FORTRAN for interactive execution. It was developed on a DEC VAX 11/780 under VMS. It has a central memory requirement of approximately 12800 bytes. It requires four Micro Switch PDPs and two RS-232 VAX 11/780 terminal ports. The PDPE program was developed in 1985.

  13. Analysis of an autostereoscopic display: the perceptual range of the three-dimensional visual fields and saliency of static depth cues

    Science.gov (United States)

    Havig, Paul; McIntire, John; McGruder, Rhoshonda

    2006-02-01

    Autostereoscopic displays offer users the unique ability to view 3-dimensional (3D) imagery without special eyewear or headgear. However, the user's head must be within limited "eye boxes" or "viewing zones". Little research has evaluated these viewing zones from a human-in-the-loop, subjective perspective. In the first study, twelve participants evaluated the quality and amount of perceived 3D images. We manipulated distance from observer, viewing angle, and stimuli to characterize the perceptual viewing zones. The data were correlated with objective measures to investigate the amount of concurrence between the objective and subjective measures. In a second study we investigated the benefit of generating stimuli that take advantage of monocular depth cues. The purpose of this study was to determine if one could develop optimal stimuli that would give rise to the greatest 3D effect with off-axis viewing angles. Twelve participants evaluated the quality of depth perception of various stimuli, each made up of one monocular depth cue (i.e., linear perspective, occlusion, haze, size, texture, and horizon). Viewing zone analysis is discussed in terms of optimal viewing distances and viewing angles. Stimuli properties are discussed in terms of image complexity and depth cues present.

  14. Analysis the macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure macular ganglion cell complex thickness in monocular strabismic amblyopia patients, in order to explore the relationship between the degree of amblyopia and ganglion cell complex thickness, and to find out whether there is an abnormal macular ganglion cell structure in strabismic amblyopia. METHODS: Using a Fourier-domain optical coherence tomography (FD-OCT) instrument, the iVue® (Optovue Inc, Fremont, CA), macular ganglion cell complex (mGCC) thickness was measured in 26 patients (52 eyes) included in this study, and its relation with best-corrected visual acuity was analysed statistically. RESULTS: The mean mGCC thickness in the macula was investigated in three regions: central, inner circle (3 mm) and outer circle (6 mm). The mean mGCC thicknesses in the central, inner and outer regions were 50.74±21.51 μm, 101.4±8.51 μm and 114.2±9.455 μm in the strabismic amblyopia eyes (SAE), and 43.79±11.92 μm, 92.47±25.01 μm and 113.3±12.88 μm in the contralateral sound eyes (CSE), respectively. There was no statistically significant difference between the eyes (P>0.05), but best-corrected visual acuity correlated with mGCC thickness, with a stronger correlation for the lower part than the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness. Although no statistically significant difference in mGCC thickness was found between the SAE and CSE, measuring central macular mGCC thickness in the clinic may help assess the degree of amblyopia.

  15. Layer- and cell-type-specific subthreshold and suprathreshold effects of long-term monocular deprivation in rat visual cortex.

    Science.gov (United States)

    Medini, Paolo

    2011-11-23

    Connectivity and dendritic properties are determinants of plasticity that are layer and cell-type specific in the neocortex. However, the impact of experience-dependent plasticity at the level of synaptic inputs and spike outputs remains unclear along vertical cortical microcircuits. Here I compared subthreshold and suprathreshold sensitivity to prolonged monocular deprivation (MD) in rat binocular visual cortex in layer 4 and layer 2/3 pyramids (4Ps and 2/3Ps) and in thick-tufted and nontufted layer 5 pyramids (5TPs and 5NPs), which innervate different extracortical targets. In normal rats, 5TPs and 2/3Ps are the most binocular in terms of synaptic inputs, and 5NPs are the least. Spike responses of all 5TPs were highly binocular, whereas those of 2/3Ps were dominated by either the contralateral or ipsilateral eye. MD dramatically shifted the ocular preference of 2/3Ps and 4Ps, mostly by depressing deprived-eye inputs. Plasticity was profoundly different in layer 5. The subthreshold ocular preference shift was sevenfold smaller in 5TPs because of smaller depression of deprived inputs combined with a generalized loss of responsiveness, and was undetectable in 5NPs. Despite their modest ocular dominance change, spike responses of 5TPs consistently lost their typically high binocularity during MD. The comparison of MD effects on 2/3Ps and 5TPs, the main affected output cells of vertical microcircuits, indicated that subthreshold plasticity is not uniquely determined by the initial degree of input binocularity. The data raise the question of whether 5TPs are driven solely by 2/3Ps during MD. The different suprathreshold plasticity of the two cell populations could underlie distinct functional deficits in amblyopia.

  16. Display standards for commercial flight decks

    Science.gov (United States)

    Lamberth, Larry S.; Penn, Cecil W.

    1994-06-01

    SAE display standards are used as guidelines for certifying commercial airborne electronic displays. The SAE document generation structure and approval process is described. The SAE committees that generate display standards are described. Three SAE documents covering flat panel displays (AS-8034, ARP-4256, and ARP-4260) are discussed with their current status. Head-Up Display documents are also in work.

  17. Display Apple M7649Zm

    CERN Multimedia

    2001-01-01

    It was designed for the Power Mac G4. This Apple studio display gives you edge-to-edge distortion-free images. With more than 16.7 million colors and 1,280 x 1,024 pixel resolution, you view brilliant and bright images on this Apple 17-inch monitor.

  18. Book Display as Adult Service.

    Science.gov (United States)

    Moore, Matthew S.

    1997-01-01

    Defines book display as an adult service as choosing and positioning adult books from the library collection to increase their circulation. The author contrasts bookstore arrangement for sales versus library arrangement for access, including contrasting missions, genre grouping, weeding, problems, and dimensions. (Author/LRW)

  19. Real Time Sonic Boom Display

    Science.gov (United States)

    Haering, Ed

    2014-01-01

    This presentation will provide general information about sonic boom mitigation technology to the public in order to supply information to potential partners and licensees. The technology is a combination of flight data, atmospheric data and terrain information implemented into a control room real time display for flight planning. This research is currently being performed and as such, any results and conclusions are ongoing.

  20. Graphics Display of Foreign Scripts.

    Science.gov (United States)

    Abercrombie, John R.

    1987-01-01

    Describes Graphics Project for Foreign Language Learning at the University of Pennsylvania, which has developed ways of displaying foreign scripts on microcomputers. Character design on computer screens is explained; software for graphics, printing, and language instruction is discussed; and a text editor is described that corrects optically…

  1. Verbal Modification via Visual Display

    Science.gov (United States)

    Richmond, Edmun B.; Wallace-Childers, La Donna

    1977-01-01

    The inability of foreign language students to produce acceptable approximations of new vowel sounds initiated a study to devise a real-time visual display system whereby the students could match vowel production to a visual pedagogical model. The system used amateur radio equipment and a standard oscilloscope. (CHK)

  2. Colour displays for categorical images

    NARCIS (Netherlands)

    Glasbey, C.; Heijden, van der G.W.A.M.; Toh, V.F.K.; Gray, A.J.

    2007-01-01

    We propose a method for identifying a set of colours for displaying 2D and 3D categorical images when the categories are unordered labels. The principle is to find maximally distinct sets of colours. We either generate colours sequentially, to maximize the dissimilarity or distance between a new col
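    Although the abstract above is truncated, the stated principle (sequentially adding the colour that is most distinct from those already chosen) can be sketched as a greedy max-min search. The RGB candidate grid and Euclidean distance below are simplifying assumptions; the authors may well work in a perceptual colour space.

```python
import numpy as np

def distinct_colours(n: int, grid: int = 6) -> np.ndarray:
    """Greedily pick n colours, each maximizing its minimum distance
    to the colours already selected (max-min criterion)."""
    levels = np.linspace(0.0, 1.0, grid)
    candidates = np.array(np.meshgrid(levels, levels, levels)).T.reshape(-1, 3)
    chosen = [np.array([1.0, 1.0, 1.0])]      # seed with the (white) background colour
    for _ in range(n):
        dists = np.linalg.norm(candidates[:, None, :] - np.array(chosen)[None, :, :], axis=2)
        chosen.append(candidates[dists.min(axis=1).argmax()])
    return np.array(chosen[1:])               # drop the seed

print(distinct_colours(5))                    # five mutually distinct RGB triples
```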

  3. Autostereoscopic display with eye tracking

    Science.gov (United States)

    Tomono, Takao; Hoon, Kyung; Ha, Yong Soo; Kim, Sung-Sik; Son, Jung-Young

    2002-05-01

    An auto-stereoscopic 21-inch display with eye tracking, having a wide viewing zone and bright image, was fabricated. The image of the display is projected to the retina through several optical components. We calculated the optical system for a wider viewing zone by using an inverse ray-trace method. The viewing zone of the first model is 155 mm (theoretical value: 161 mm). We could widen the viewing zone by controlling the paraxial radius of curvature of the spherical mirror, the distance between lenses and so on. The viewing zone of the second model is 208 mm. We used two spherical mirrors to obtain twice the brightness. We applied an eye-tracking system to the display system. Eye recognition is based on a neural network card based on ZICS technology. We fabricated the auto-stereoscopic 21-inch display with eye tracking and measured the viewing zone based on the illumination area. The viewing zone was 206 mm, which was close to the theoretical value. We also obtained twice the brightness. A 3D image could be seen according to viewing position without headgear.

  4. Crystal ball single event display

    Energy Technology Data Exchange (ETDEWEB)

    Grosnick, D.; Gibson, A. [Valparaiso Univ., IN (United States). Dept. of Physics and Astronomy; Allgower, C. [Argonne National Lab., IL (United States). High Energy Physics Div.; Alyea, J. [Valparaiso Univ., IN (United States). Dept. of Physics and Astronomy]|[Argonne National Lab., IL (United States). High Energy Physics Div.

    1997-10-15

    The Single Event Display (SED) is a routine that is designed to provide information graphically about a triggered event within the Crystal Ball. The SED is written entirely in FORTRAN and uses the CERN-based HIGZ graphing package. The primary display shows the amount of energy deposited in each of the NaI crystals on a Mercator-like projection of the crystals. Ten different shades and colors correspond to varying amounts of energy deposited within a crystal. Information about energy clusters is displayed on the crystal map by outlining in red the thirteen (or twelve) crystals contained within a cluster and assigning each cluster a number. Additional information about energy clusters is provided in a series of boxes containing useful data about the energy distribution among the crystals within the cluster. Other information shown on the event display includes the event trigger type and data about π0's and η's formed from pairs of clusters as found by the analyzer. A description of the major features is given, along with some information on how to install the SED into the analyzer.

  5. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  6. Information retrieval and display system

    Science.gov (United States)

    Groover, J. L.; King, W. L.

    1977-01-01

    Versatile command-driven data management system offers users, through simplified command language, a means of storing and searching data files, sorting data files into specified orders, performing simple or complex computations, effecting file updates, and printing or displaying output data. Commands are simple to use and flexible enough to meet most data management requirements.

  7. Display Sharing: An Alternative Paradigm

    Science.gov (United States)

    Brown, Michael A.

    2010-01-01

    The current Johnson Space Center (JSC) Mission Control Center (MCC) Video Transport System (VTS) provides flight controllers and management the ability to meld raw video from various sources with telemetry to improve situational awareness. However, maintaining a separate infrastructure for video delivery and integration of video content with data adds significant complexity and cost to the system. When considering alternative architectures for a VTS, the current system's ability to share specific computer displays in their entirety to other locations, such as large projector systems, flight control rooms, and back supporting rooms throughout the facilities and centers must be incorporated into any new architecture. Internet Protocol (IP)-based systems also support video delivery and integration. IP-based systems generally have an advantage in terms of cost and maintainability. Although IP-based systems are versatile, the task of sharing a computer display from one workstation to another can be time consuming for an end-user and inconvenient to administer at a system level. The objective of this paper is to present a prototype display sharing enterprise solution. Display sharing is a system which delivers image sharing across the LAN while simultaneously managing bandwidth, supporting encryption, enabling recovery and resynchronization following a loss of signal, and minimizing latency. Additional critical elements will include image scaling support, multi-sharing, ease of initial integration and configuration, integration with desktop window managers, collaboration tools, host and recipient controls. The goal of this paper is to summarize the various elements of an IP-based display sharing system that can be used in today's control center environment.

  8. Manual control displays for a four-dimensional landing approach

    Science.gov (United States)

    Silverthorn, J. T.; Swaim, R. L.

    1975-01-01

    Six instrument rated pilots flew a STOL fixed base simulator to study the effectiveness of three displays for a four dimensional approach. The three examined displays were a digital readout of forward position error, a digital speed command, and an analog display showing forward position error and error prediction. A flight director was used in all conditions. All test runs were for a typical four dimensional approach in moderate turbulence that included a change in commanded ground speed, a change in flight path angle, and two standard rate sixty degree turns. Use of the digital forward position error display resulted in large overshoot in the forward position error. Some type of lead (rate or prediction information) was shown to be needed. The best overall performance was obtained using the speed command display. It was demonstrated that curved approaches can be flown with relative ease.

  9. SureTrak Probability of Impact Display

    Science.gov (United States)

    Elliott, John

    2012-01-01

    The SureTrak Probability of Impact Display software was developed for use during rocket launch operations. The software displays probability of impact information for each ship near the hazardous area during the time immediately preceding the launch of an unguided vehicle. Wallops range safety officers need to be sure that the risk to humans is below a certain threshold during each use of the Wallops Flight Facility Launch Range. Under the variable conditions that can exist at launch time, the decision to launch must be made in a timely manner to ensure a successful mission while not exceeding those risk criteria. Range safety officers need a tool that can give them the needed probability of impact information quickly, and in a format that is clearly understandable. This application is meant to fill that need. The software is a reuse of part of software developed for an earlier project: Ship Surveillance Software System (S4). The S4 project was written in C++ using Microsoft Visual Studio 6. The data structures and dialog templates from it were copied into a new application that calls the implementation of the algorithms from S4 and displays the results as needed. In the S4 software, the list of ships in the area was received from one local radar interface and from operators who entered the ship information manually. The SureTrak Probability of Impact Display application receives ship data from two local radars as well as the SureTrak system, eliminating the need for manual data entry.

  10. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    Science.gov (United States)

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.
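    The sketch below illustrates the final step implied by this control-point representation: once the 2D projections of a part's 3D control points have been predicted, a standard PnP solver recovers that part's pose. The control-point layout, intrinsics and "predicted" projections are synthetic stand-ins for the network outputs described in the paper.

```python
import numpy as np
import cv2

# Hypothetical 3D control points defined in the part's local frame (metres)
control_pts_3d = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1],
                           [0.1, 0.1, 0], [0.1, 0, 0.1]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume no lens distortion

# Synthesize "predicted" 2D projections from a known ground-truth pose
rvec_gt = np.array([[0.10], [-0.20], [0.05]])
tvec_gt = np.array([[0.05], [-0.02], [0.80]])
proj_2d, _ = cv2.projectPoints(control_pts_3d, rvec_gt, tvec_gt, K, dist)

# Recover the part pose from the predicted projections
ok, rvec, tvec = cv2.solvePnP(control_pts_3d, proj_2d, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print(ok, rvec.ravel(), tvec.ravel())   # matches the ground-truth pose
```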

  11. Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

    Directory of Open Access Journals (Sweden)

    Ghina El Natour

    2015-10-01

    Full Text Available In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of these two sensors considering the robustness to the environmental conditions and depth detection ability of the radar, on the one hand, and the high spatial resolution of a vision sensor, on the other. Firstly, geometric modeling of each sensor and of the entire system is presented. Secondly, we address the global calibration problem, which consists of finding the exact transformation between the sensors’ coordinate systems. Two implementation methods are proposed and compared, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences. Unlike existing methods, no special configuration of the 3D points is required for calibration. This makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple, yet robust 3D reconstruction method based on the sensors’ geometry. This method enables one to reconstruct observed features in 3D using one acquisition (static sensor), which is not always met in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
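    The sketch below illustrates the global-calibration step described here: estimating a rigid transform between the radar and camera frames by minimizing a non-linear reprojection criterion over radar-to-image target correspondences. The point sets, intrinsics and parameterization are hypothetical; the paper's own criterion and radar geometry are more involved.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import least_squares

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

def project(pts_cam):
    """Pinhole projection of 3-D points expressed in the camera frame."""
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Hypothetical targets measured in the radar frame, and their noisy image detections
rng = np.random.default_rng(1)
pts_radar = rng.uniform([-2, -1, 4], [2, 1, 10], size=(12, 3))
rot_gt = R.from_euler("xyz", [0.05, -0.10, 0.02])
t_gt = np.array([0.3, -0.1, 0.2])
obs_2d = project(rot_gt.apply(pts_radar) + t_gt) + rng.normal(scale=0.5, size=(12, 2))

def residuals(params):
    rot, t = R.from_rotvec(params[:3]), params[3:]
    return (project(rot.apply(pts_radar) + t) - obs_2d).ravel()

sol = least_squares(residuals, x0=np.zeros(6))
print(sol.x)   # rotation vector and translation close to the ground-truth transform
```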

  12. Driver performance-based assessment of thermal display degradation effects

    Science.gov (United States)

    Ruffner, John W.; Massimi, Michael S.; Choi, Yoon S.; Ferrett, Donald A.

    1998-07-01

    The Driver's Vision Enhancer (DVE) is a thermal sensor and display combination currently being procured for use in U.S. Army combat and tactical wheeled vehicles. During the DVE production process, a given number of sensor or display pixels may either vary from the desired luminance values (nonuniform) or be inactive (nonresponsive). The amount and distribution of pixel luminance nonuniformity (NU) and nonresponsivity (NR) allowable in production DVEs is a significant cost factor. No driver performance-based criteria exist for determining the maximum amount of allowable NU and NR. For safety reasons, these characteristics are specified conservatively. This paper describes an experiment to assess the effects of different levels of display NU and NR on Army drivers' ability to identify scene features and obstacles using a simulated DVE display and videotaped driving scenarios. Baseline, NU, and NR display conditions were simulated using real-time image processing techniques and a computer graphics workstation. The results indicate that there is a small, but statistically insignificant decrease in identification performance with the NU conditions tested. The pattern of the performance-based results is consistent with drivers' subjective assessments of display adequacy. The implications of the results for specifying NU and NR criteria for the DVE display are discussed.

  13. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
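    The sketch below illustrates the planar-homography step mentioned above, using OpenCV: estimate a homography between tracked feature correspondences on the (assumed planar) landing surface in two frames, then decompose it into candidate rotations, translations and plane normals. The correspondences and intrinsics are synthetic; selecting the physically valid decomposition and turning it into approach waypoints is not shown.

```python
import numpy as np
import cv2

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic correspondences of coplanar scene features seen in two frames
rng = np.random.default_rng(2)
pts1 = rng.uniform([50, 50], [590, 430], size=(20, 2)).astype(np.float32)
H_true = np.array([[1.02, 0.01, 5.0], [0.00, 1.03, -3.0], [1e-5, 2e-5, 1.0]])
pts1_h = np.hstack([pts1, np.ones((20, 1), dtype=np.float32)])
pts2 = (H_true @ pts1_h.T).T
pts2 = (pts2[:, :2] / pts2[:, 2:3]).astype(np.float32)

# Estimate the homography robustly, then decompose it
H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)
n_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(f"{n_solutions} candidate (R, t, n) decompositions")
```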

  14. Visual display requirements: on standards and their users

    Science.gov (United States)

    van Nes, Floris L.

    2005-01-01

    A new ISO standard for visual displays: "Ergonomic requirements and measurement techniques for electronic visual displays" is soon to be released as a Draft International Standard. The core of the new standard is the part with generic ergonomic requirements for visual displays. Three parts of the standard describe three types of measurements: electro-optical ones, to be used in general; user performance test methods, for innovative displays for which no electro-optical methods exist; and field assessment methods, to be used outside of the laboratory, under the conditions of use at the workplace. The last part of the standard describes five compliance routes and procedures for five different display technologies and contexts of use. A number of choices and problems that standard writers have to face are mentioned. Should a visual display standard be written primarily for young users, with mostly a high visual acuity and in possession of their full accommodative power, wanting to use tiny hand-held displays featuring very small characters ? Or should the standard be focused on the elderly, with their reduction in visual faculties, barring the use of small characters that may irritate or disable such older users ? The question how to put human factors principles in standards sometimes seems a battle between idealists and realists. It therefore is important to strike a balance between different attitudes, backgrounds and interests in a standards writing committee - as indeed happens in ISO/TC 159/SC 4/WG 2, "Visual Display Requirements". The author is convener of this Working Group.

  15. Displays for future intermediate UAV

    Science.gov (United States)

    Desjardins, Daniel; Metzler, James; Blakesley, David; Rister, Courtney; Nuhu, Abdul-Razak

    2008-04-01

    The Dedicated Autonomous Extended Duration Airborne Long-range Utility System (DAEDALUS) is a prototype Unmanned Aerial Vehicle (UAV) that won the 2007 AFRL Commander's Challenge. The purpose of the Commander's Challenge was to find an innovative solution to urgent warfighter needs by designing a UAV with increased persistence for tactical employment of sensors and communication systems. DAEDALUS was chosen as a winning prototype by AFRL, AFMC and SECAF. Follow-on units are intended to fill an intermediate role between currently fielded Tier I and Tier II UAV's. The UAV design discussed in this paper, including sensors and displays, will enter Phase II for Rapid Prototype Development with the intent of developing the design for eventual production. This paper will discuss the DAEDALUS UAV prototype system, with particular focus on its communications, to include the infrared sensor and electro-optical camera, but also displays, specifically man-portable.

  16. Characterization of the rotating display.

    Science.gov (United States)

    Keyes, J W; Fahey, F H; Harkness, B A; Eggli, D F; Balseiro, J; Ziessman, H A

    1988-09-01

    The rotating display is a useful method for reviewing single photon emission computed tomography (SPECT) data. This study evaluated the requirements for a subjectively pleasing and useful implementation of this technique. Twelve SPECT data sets were modified and viewed by several observers who recorded the minimum framing rates for apparent smooth rotation, 3D effect, effects of image size, and other parameters. The results showed that a minimum of 16 frames was needed for a useful display. Smaller image sizes and more frames were preferred. The recommended minimal framing rate for a 64-frame study is 16-17 frames per second and for a 32-frame study, 12-13 frames per second. Other enhancements also were useful.

  17. Interactive display of polygonal data

    Energy Technology Data Exchange (ETDEWEB)

    Wood, P.M.

    1977-10-01

    Interactive computer graphics is an excellent approach to many types of applications. It is an exciting method of doing geographic analysis when desiring to rapidly examine existing geographically related data or to display specially prepared data and base maps for publication. One such program is the interactive thematic mapping system called CARTE, which combines polygonal base maps with statistical data to produce shaded maps using a variety of shading symbolisms on a variety of output devices. A polygonal base map is one where geographic entities are described by points, lines, or polygons. It is combined with geocoded data to produce special subject or thematic maps. Shading symbolisms include texture shading for areas, varying widths for lines, and scaled symbols for points. Output devices include refresh and storage CRTs and auxiliary Calcomp or COM hardcopy. The system is designed to aid in the quick display of spatial data and in detailed map design.

  18. Game engines and immersive displays

    Science.gov (United States)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.

  19. Historical halo displays as past weather indicator

    Science.gov (United States)

    Neuhäuser, Dagmar; Neuhäuser, Ralph

    2017-04-01

    Certain halo displays like the 22° circle have been known for millennia to indicate specific weather patterns - as specified in Babylonian omina, Aristotle's Meteorology, farmers' weather lore, etc. Today, it is known that halo phenomena are due to refraction and reflection of sun and moon light in ice crystals in cirrus and cirrostratus, so that halo observations do indicate atmospheric conditions like temperature, humidity, pressure etc. at a few km altitude. The Astronomical Diaries of Babylonia have recorded both halo phenomena (circles, parhelia, etc.) and weather conditions (rain, clouds, etc.), so that we can use them to show statistically whether, which and how fast halo phenomena are related to weather - for the last few centuries BC for Babylonia. We can then also compare the observations of Babylonian priests in the given BC epoch (without air and light pollution) with the last few decades of the modern epoch (with air and light pollution), where amateur halo observers have systematically recorded such phenomena (in Europe). Weather and climate are known to be partly driven by solar activity. Hence, one could also consider whether there is an indirect relation between halo displays as a weather proxy and aurorae as a solar activity proxy - if low solar activity leads to low-pressure systems, one could expect more halos; preliminary studies hint at such a relation. For the last few decades, we have many halo observations, satellite imaging of the aurora oval, and many data on solar activity. A statistically sufficient amount of aurora and halo observations should be available for the historic time to investigate such a possible connection: halos were recorded very often in antiquity and medieval times (as found in chronicles etc.), and modern scholarly catalogs of aurorae also often contain unrecognized halo displays.

  20. Proof nets for display logic

    CERN Document Server

    Moot, Richard

    2007-01-01

    This paper explores several extensions of proof nets for the Lambek calculus in order to handle the different connectives of display logic in a natural way. The new proof net calculus handles some recent additions to the Lambek vocabulary such as Galois connections and Grishin interactions. It concludes with an exploration of the generative capacity of the Lambek-Grishin calculus, presenting an embedding of lexicalized tree adjoining grammars into the Lambek-Grishin calculus.

  1. Modern Display Technologies and Applications

    Science.gov (United States)

    1982-01-01

    conventional tubes, LSI circuitry offers the possibility of correcting some of the deficiencies in electron-optic performance and may lead to acceptable... certain ceramic materials such as PLZT (lead lanthanum zirconate titanate) can be utilized for display applications. PLZT is transparent in the visible... consuming power (3.8.12). 3.8.4.2 State of development. Magnetic particles have been made of polyethylene with powdered strontium ferrite as a filler

  2. Multiview synthesis for autostereoscopic displays

    Science.gov (United States)

    Dane, Gökçe.; Bhaskaran, Vasudev

    2013-09-01

    Autostereoscopic (AS) displays spatially multiplex multiple views, providing a more immersive experience by enabling users to view the content from different angles without the need for 3D glasses. Multiple views could be captured from multiple cameras at different orientations; however, this can be expensive, time consuming, and impractical for some applications. The goal of multiview synthesis in this paper is to generate multiple views from a stereo image pair and a disparity map using various video processing techniques, including depth/disparity map processing, initial view interpolation, inpainting, and post-processing. We specifically emphasize the need for disparity processing when no depth information associated with the 2D data is available, and we propose a segmentation-based disparity processing algorithm to improve the disparity map. Furthermore, we extend a texture-based 2D inpainting algorithm to 3D and further improve the hole-filling performance of view synthesis. The benefit of each step of the proposed algorithm is demonstrated by comparison with state-of-the-art algorithms in terms of visual quality and the PSNR metric. Our system is evaluated in an end-to-end multiview synthesis framework in which only a stereo image pair is provided as input and eight views are generated and displayed on an 8-view Alioscopy AS display.
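
    As a rough illustration of the initial view-interpolation step described above, the sketch below forward-warps one view by a scaled disparity and marks disoccluded pixels as holes for later inpainting. The array shapes, sign convention, and scaling are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def synthesize_view(left, disparity, alpha):
          """Forward-warp the left image toward a virtual viewpoint.

          left:      H x W array (grayscale for simplicity)
          disparity: H x W array of per-pixel disparities in pixels
          alpha:     virtual camera position, 0.0 = left view, 1.0 = right view
          Returns the warped view and a mask of holes (disoccluded pixels).
          """
          h, w = left.shape
          virtual = np.zeros_like(left)
          filled = np.zeros((h, w), dtype=bool)
          for y in range(h):
              for x in range(w):
                  xv = int(round(x - alpha * disparity[y, x]))  # shift by scaled disparity
                  if 0 <= xv < w:
                      virtual[y, xv] = left[y, x]
                      filled[y, xv] = True
          return virtual, ~filled  # holes are handed to the inpainting stage

      # Tiny synthetic example: constant 2-pixel disparity, virtual view halfway between.
      left = np.tile(np.arange(8, dtype=float), (4, 1))
      disp = np.full((4, 8), 2.0)
      view, holes = synthesize_view(left, disp, alpha=0.5)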

  3. Percepção monocular da profundidade ou relevo na ilusão da máscara côncava na esquizofrenia

    Directory of Open Access Journals (Sweden)

    Arthur Alves

    2014-03-01

    Full Text Available This study investigated the monocular perception of the depth, or relief, of the hollow mask in 29 healthy individuals, seven individuals with schizophrenia who had been taking antipsychotic medication for four weeks or less, and 29 who had been taking antipsychotic medication for more than four weeks. The three groups judged the reverse side of a polychrome mask under two lighting conditions, illuminated from above and from below. The results indicated that most individuals with schizophrenia inverted the depth of the hollow mask under monocular viewing and perceived it as convex, and were therefore susceptible to the hollow-mask illusion. Individuals with schizophrenia who had been taking antipsychotic medication for more than four weeks estimated the convexity of the hollow mask illuminated from above as shorter in extent than healthy individuals did.

  4. Gestures to Intuitively Control Large Displays

    NARCIS (Netherlands)

    Fikkert, F.W.; Vet, van der P.E.; Rauwerda, H.; Breit, T.; Nijholt, A.; Sales Dias, M.; Gibet, S.; Wanderley, M.W.; Bastos, R.

    2009-01-01

    Large displays are highly suited to supporting discussions in empirical science. Such displays can present project results on a large digital surface to feed the discussion. This paper describes our approach to closely involving multidisciplinary omics scientists in the design of an intuitive display con

  5. 27 CFR 6.55 - Display service.

    Science.gov (United States)

    2010-04-01

    ... Distribution Service § 6.55 Display service. Industry member reimbursements to retailers for setting up product or other displays constitutes paying the retailer for rendering a display service within the meaning... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Display service. 6.55...

  6. Review of Defense Display Research Programs

    Science.gov (United States)

    2001-01-01

    Program excerpt: flat-panel autostereoscopic N-perspective 3D, high-definition DMD digital projector, light piping and quantum cavity displays, solid-state laser... megapixel displays (size commonality, 67% weight reduction, >200 sq. in. per display), 20-20 vision simulators, true 3D with sparse symbols, foldable display... megapixel 2D and true 3D display technology (25M & T3D, FY02-FY06), new service thrusts

  7. Recent Trend in Development of Olfactory Displays

    Science.gov (United States)

    Yanagida, Yasuyuki

    An olfactory display is a device that generates scented air with a desired concentration of aroma and delivers it to the user's olfactory organ. In this article, the nature of olfaction is briefly described from the viewpoint of how to configure olfactory displays. Next, component technologies to compose olfactory displays, i.e., making scents and delivering scents, are categorized. Several existing olfactory display systems are introduced to show the current status of research and development of olfactory displays.

  8. A comparison of visual and kinesthetic-tactual displays for compensatory tracking

    Science.gov (United States)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1983-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under certain conditions it can be an effective alternative or supplement to visual displays. In order to understand better how KT tracking compares with visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. In the critical tracking task the visual displays were superior; however, the quickened KT display was approximately equal to the unquickened visual display. In the stationary tracking tasks, subjects adopted lag equalization with the quickened KT and visual displays, and mean-squared error scores were approximately equal. With the unquickened displays, subjects adopted lag-lead equalization, and the visual displays were superior. This superiority was partly due to the servomotor lag in the implementation of the KT display and partly due to modality differences.
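
    For readers unfamiliar with quickening, the displayed cue is typically a weighted sum of the tracking error and its rate of change; the sketch below shows that combination with a gain chosen purely for illustration, not the value used in these experiments.

      def quickened_cue(error, prev_error, dt, k_velocity=0.25):
          """Combine position error with its derivative (a simple quickened display).

          error, prev_error: current and previous tracking-error samples
          dt:                sample interval in seconds
          k_velocity:        illustrative weight on the velocity term
          """
          error_rate = (error - prev_error) / dt
          return error + k_velocity * error_rate

      # Example: a growing error yields a quickened cue larger than the raw error.
      samples = [0.0, 0.1, 0.25, 0.45]
      dt = 0.05
      cues = [quickened_cue(samples[i], samples[i - 1], dt) for i in range(1, len(samples))]
      print(cues)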

  9. Simulated monitor display for CCTV

    Energy Technology Data Exchange (ETDEWEB)

    Steele, B.J.

    1982-01-01

    Two computer programs have been developed which generate a two-dimensional graphic perspective of the video output produced by a Closed Circuit Television (CCTV) camera. Both programs were primarily written to produce a graphic display simulating the field-of-view (FOV) of a perimeter assessment system as seen on a CCTV monitor. The original program was developed for use on a Tektronix 4054 desktop computer; however, the usefulness of this graphic display program led to the development of a similar program for a Hewlett-Packard 9845B desktop computer. After entry of various input parameters, such as camera lens and orientation, the programs automatically calculate and graphically plot the locations of various items, e.g., fences, an assessment zone, running men, and intrusion detection sensors. Numerous special effects can be generated to simulate such things as roads, interior walls, or sides of buildings. Other objects can be digitized and entered into permanent memory, similar to the running men. With this type of simulated monitor perspective, proposed camera locations with respect to fences and a particular assessment zone can be rapidly evaluated without the costly time delays and expenditures associated with field evaluation.
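
    At the core of such a program is an ordinary perspective projection of scene points (fence posts, sensors, figures) into the camera's image plane. The sketch below assumes a simple pinhole model with points already expressed in camera coordinates; it is not the original Tektronix or Hewlett-Packard code.

      def project_point(point_cam, focal_length_mm, sensor_width_mm, image_width_px):
          """Project a 3-D point in camera coordinates onto the image plane.

          point_cam: (x, y, z) in metres, with z > 0 along the optical axis.
          Returns pixel offsets (u, v) from the image centre, or None if the
          point lies behind the camera.
          """
          x, y, z = point_cam
          if z <= 0:
              return None
          px_per_mm = image_width_px / sensor_width_mm
          u = focal_length_mm * (x / z) * px_per_mm
          v = focal_length_mm * (y / z) * px_per_mm
          return u, v

      # A fence post 2 m left of the axis and 1 m below it, 30 m away, 25 mm lens.
      print(project_point((-2.0, -1.0, 30.0), focal_length_mm=25.0,
                          sensor_width_mm=12.7, image_width_px=640))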

  10. A comparison of tracking with visual and kinesthetic-tactual displays

    Science.gov (United States)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.

    1981-01-01

    Recent research on manual tracking with a kinesthetic-tactual (KT) display suggests that under appropriate conditions it may be an effective means of providing visual workload relief. In order to better understand how KT tracking differs from visual tracking, both a critical tracking task and stationary single-axis tracking tasks were conducted with and without velocity quickening. On the critical tracking task, the visual displays were superior; however, the KT quickened display was approximately equal to the visual unquickened display. Mean squared error scores in the stationary tracking tasks for the visual and KT displays were approximately equal in the quickened conditions, and the describing functions were very similar. In the unquickened conditions, the visual display was superior. Subjects using the unquickened KT display exhibited a low frequency lead-lag that may be related to sensory adaptation.

  11. Accelerometer Method and Apparatus for Integral Display and Control Functions

    Science.gov (United States)

    Bozeman, Richard J., Jr. (Inventor)

    1998-01-01

    Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
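
    The signal path described above (integrate acceleration to velocity, then compare against a selected trip point) can be sketched as follows; the sampling rate, calibration factor, and trip level are assumptions for illustration only.

      def acceleration_to_velocity(accel_samples, dt, calibration=1.0):
          """Numerically integrate an acceleration signal (m/s^2) into velocity (m/s)."""
          velocity, v = [], 0.0
          for a in accel_samples:
              v += a * dt                      # simple rectangular integration
              velocity.append(v * calibration)
          return velocity

      def trip_alert(velocity_samples, trip_point):
          """Return True if the vibration velocity exceeds the selected trip point."""
          return any(abs(v) > trip_point for v in velocity_samples)

      dt = 0.001                                   # 1 kHz sampling, assumed
      accel = [0.0, 0.5, 1.0, 1.5, 1.0, 0.5] * 10  # illustrative broadband samples
      vel = acceleration_to_velocity(accel, dt)
      print(trip_alert(vel, trip_point=0.02))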

  12. Colorful displays signal male quality in a tropical anole lizard

    Science.gov (United States)

    Cook, Ellee G.; Murphy, Troy G.; Johnson, Michele A.

    2013-10-01

    Parasites influence colorful ornaments and their behavioral display in many animal hosts. Because coloration and display behavior are often critical components of communication, variation in these traits may have important implications for individual fitness, yet it remains unclear whether such traits are signals of quality in many taxa. We investigated the association between ectoparasitic mite load and the color and behavioral use of the throat fan (dewlap) by male Anolis brevirostris lizards. We found that heavily parasitized lizards exhibited lower body condition, duller dewlaps, and less frequent dewlap displays than less parasitized individuals. Our results thus suggest that highly parasitized individuals invest less in both ornamental color and behavioral display of that color. Because the two components of the signal simultaneously provide information on male quality, this study provides novel support for the long-standing hypothesis that colorful traits may function as social or sexual signals in reptiles.

  13. Vergence and accommodation to multiple-image-plane stereoscopic displays: 'Real world' responses with practical image-plane separations?

    Science.gov (United States)

    MacKenzie, K. J.; Dickson, R. A.; Watt, S. J.

    2011-03-01

    Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One promising solution is to distribute image intensity across a number of relatively widely spaced image planes - a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters, suggesting that a relatively small (i.e. practical) number of image planes is sufficient to eliminate vergence-accommodation conflicts over a large range of simulated distances. However, accommodation responses have been found to overshoot systematically when the same stimuli are viewed binocularly. Here, we examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to step changes in depth for depth-filtered stimuli, using image-plane separations of 0.6-1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6-0.9 D, but inaccurate thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display, using a relatively small number of image planes.
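
    Depth filtering as described here distributes a point's image intensity between the two image planes that bracket its simulated distance, in proportion to dioptric distance. A minimal sketch of that weighting follows; the plane positions are chosen only as an example consistent with the separations discussed.

      def depth_filter_weights(target_diopters, plane_diopters):
          """Split intensity between the two image planes bracketing the target depth.

          target_diopters: simulated distance of the point, in diopters (1/m)
          plane_diopters:  sorted list of image-plane distances, in diopters
          Returns per-plane intensity weights that sum to 1.
          """
          weights = [0.0] * len(plane_diopters)
          if target_diopters <= plane_diopters[0]:
              weights[0] = 1.0                 # clamp outside the plane range
              return weights
          if target_diopters >= plane_diopters[-1]:
              weights[-1] = 1.0
              return weights
          for i in range(len(plane_diopters) - 1):
              near, far = plane_diopters[i], plane_diopters[i + 1]
              if near <= target_diopters <= far:
                  frac = (target_diopters - near) / (far - near)
                  weights[i] = 1.0 - frac      # linear interpolation in diopters
                  weights[i + 1] = frac
                  return weights

      # Planes spaced 0.9 D apart; a point simulated at 1.2 D splits 2/3 : 1/3.
      print(depth_filter_weights(1.2, [0.0, 0.9, 1.8, 2.7]))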

  14. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about the research taking place in the biology laboratory. This should promote contributions to the grid, and thereby mediate the appropriation of the grid technology. GridOrbit visualizes the activity in the grid, shows information about the different active projects, and supports a messaging functionality where people comment on projects. Our work explores the usage of interactive technologies as enablers for the appropriation of an otherwise invisible infrastructure.

  15. Jacquard-woven photonic bandgap fiber displays

    CERN Document Server

    Sayed, Imran; Skorobogatiy, Maksim

    2010-01-01

    We present an overview of photonic textile displays woven on a Jacquard loom, using newly discovered polymer photonic bandgap fibers that have the ability to change color and appearance when illuminated with ambient or transmitted light. The photonic fiber can be thin (smaller than 300 microns in diameter) and highly flexible, which makes it possible to weave in the weft on a computerized Jacquard loom and develop intricate double weave structures together with a secondary weft yarn. We demonstrate how photonic crystal fibers enable a variety of color and structural patterns on the textile, and how dynamic imagery can be created by balancing the ambient and emitted radiation. Finally, a possible application in security ware for low visibility conditions is described as an example.

  16. Simplified Night Sky Display System

    Science.gov (United States)

    Castellano, Timothy P.

    2010-01-01

    A document describes a simple night sky display system that is portable, lightweight, and includes, at most, four components in its simplest configuration. The total volume of this system is no more than 10^6 cm^3 in a disassembled state, and it weighs no more than 20 kilograms. The four basic components are a computer, a projector, a spherical light-reflecting first surface and mount, and a spherical second surface for display. The computer has temporary or permanent memory that contains at least one signal representing one or more images of a portion of the sky when viewed from an arbitrary position, and at a selected time. The first surface reflector is spherical and receives and reflects the image from the projector onto the second surface, which is shaped like a hemisphere. This system may be used to simulate selected portions of the night sky, preserving the appearance and kinesthetic sense of the celestial sphere surrounding the Earth or any other point in space. These points will then show motions of planets, stars, galaxies, nebulae, and comets that are visible from that position. The images may be motionless, or move with the passage of time. The array of images presented, and vantage points in space, are limited only by the computer software that is available, or can be developed. An optional approach is to have the screen (second surface) self-inflate by means of gas within the enclosed volume, and then self-regulate that gas in order to support itself without any other mechanical support.

  17. Design for an Improved Head-Mounted Display System

    Science.gov (United States)

    2007-11-02

    weight ratio and ease of manufacturing. As shown, for most structural parts, polyetherimide (PEI or ULTEM®) is chosen. ULTEM® keeps its hardness and... Exploded view of the monocular opto-mechanical module (figure): frame (Ultem), heatsink (Al), front housing (Ultem), prism housing (Ultem), lens barrel (Ultem)... and right side with knobs (Al), crossed roller slide (steel), capture/mount for opto-mechanical assembly (Ultem), IPD plate (Ultem), total weight of the

  18. ProPath Display Process Overview

    Data.gov (United States)

    Department of Veterans Affairs — This API displays an overview of the process including the description, goals, associated roles (linked to their detailed information). It also displays all of the...

  19. Holographic Waveguided See-Through Display Project

    Data.gov (United States)

    National Aeronautics and Space Administration — To address the NASA need for lightweight, space suit-mounted displays, Luminit proposes a novel Holographic Waveguided See-Through Display. Our proposed Holographic...

  20. Projection/Reflection Heads-up Display

    Data.gov (United States)

    National Aeronautics and Space Administration — To address the NASA need for an EVA information display device, Physical Optics Corporation (POC) proposes to develop a new Projection/Reflection Heads-up Display...

  1. OZ: An Innovative Primary Flight Display Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed SBIR project will develop OZ, an innovative primary flight display for aircraft. The OZ display, designed from "first principles" of vision science,...

  2. Zero Calibration of Delta Robot Based on Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    孙月海; 王兰; 梅江平; 张文昌; 刘艺

    2013-01-01

    To improve the precision of lower-mobility, high-speed pick-and-place parallel robots in practical engineering, this paper proposes a fast calibration approach based on vision metrology. Taking the Delta robot as an example, a zero-point error model was established through system analysis and reasonable simplification of the mechanism. A zero-error identification model based on monocular-vision planar measurement was then constructed: with the mobile platform moving in a horizontal plane, the zero error can be identified by measuring only the x- and y-direction position errors of the end-effector with monocular vision. Error compensation is realized by modifying the ideal zero-point position of the system. Calibration experiments show that the method is simple, effective, and practical.

  3. Emerging Large-Screen Display Technology

    Science.gov (United States)

    1992-11-01

    Reference excerpt: ...1255, Santa Clara, CA. 25. Williams, R. D., and F. Garcia, 1988, "A Real Time Autostereoscopic Multiplanar 3D Display System," Society for Information... K. Miyaji, 1989, "3D Display using Laser and Moving Screen," Japan Display 1989, Paper P3-5. 27. Sterling, R. D., R. D. TeKolste, J. M. Haggerty, T. C

  4. Data Display Markup Language (DDML) Handbook

    Science.gov (United States)

    2017-01-31

    Moreover, the tendency of T&E is towards a plug-and-play-like data acquisition system that requires standard languages and modules for data displays... Telemetry Group, Document 127-17, Data Display Markup Language (DDML) Handbook, January 2017, prepared by the Telemetry Group; Distribution A: approved for...

  5. Reconfigurable Full-Page Braille Displays

    Science.gov (United States)

    Garner, H. Douglas

    1994-01-01

    Electrically actuated braille display cells of proposed type arrayed together to form full-page braille displays. Like other braille display cells, these provide changeable patterns of bumps driven by digitally recorded text stored on magnetic tapes or in solid-state electronic memories. Proposed cells contain electrorheological fluid. Viscosity of such fluid increases in strong electrostatic field.

  6. 27 CFR 6.83 - Product displays.

    Science.gov (United States)

    2010-04-01

    ... industry member of giving or selling product displays to a retailer does not constitute a means to induce... costs are excluded. (2) All product displays must bear conspicuous and substantial advertising matter on... address of the retailer may appear on the product displays. (3) The giving or selling of such...

  7. Design and test of a situation-augmented display for an unmanned aerial vehicle monitoring task.

    Science.gov (United States)

    Lu, Jen-Li; Horng, Ruey-Yun; Chao, Chin-Jung

    2013-08-01

    In this study, a situation-augmented display for unmanned aerial vehicle (UAV) monitoring was designed, and its effects on operator performance and mental workload were examined. The display design was augmented with the knowledge that there is an invariant flight trajectory (formed by the relationship between altitude and velocity) for every flight, from takeoff to landing. 56 participants were randomly assigned to the situation-augmented display or a conventional display condition to work on 4 (number of abnormalities) x 2 (noise level) UAV monitoring tasks three times. Results showed that the effects of situation-augmented display on flight completion time and time to detect abnormalities were robust under various workload conditions, but error rate and perceived mental workload were unaffected by the display type. Results suggest that the UAV monitoring task is extremely difficult, and that display devices providing high-level situation-awareness may improve operator monitoring performance.

  8. Design of the control system for full-color LED display based on MSP430 MCU

    Science.gov (United States)

    Li, Xue; Xu, Hui-juan; Qin, Ling-ling; Zheng, Long-jiang

    2013-08-01

    The LED display integrates microelectronics, computer technology, and information processing, and with its bright colors, high dynamic range, high brightness, and long operating life it has become one of the most prominent display media of the new generation. LED displays are widely used in banks, securities trading, highway signs, airports, advertising, and similar settings. According to display color, LED screens are divided into monochrome, dual-color, and full-color displays. As display colors have diversified and display requirements have risen, LED drive circuits and control technology have advanced correspondingly. The earliest monochrome screens displayed only Chinese characters, simple characters, or digits, so the demands on the controller were relatively low. With the widespread use of dual-color LED displays, the required controller performance also increased. In recent years, full-color LED displays, which combine the three primary colors red, green, and blue with grayscale control, have attracted great attention for their rich display effects. Each full-color pixel contains red, green, and blue sub-pixels, and spatial color mixing is used to reproduce multiple colors. Here, a dynamic scanning control system for a full-color LED display is designed based on the low-power MSP430 microcontroller. The grayscale control combines a pulse-width modulation (PWM) method with the display's scanning scheme; while meeting the requirement of 256 grayscale levels, this approach improves the efficiency of the LED devices and enhances the perceived depth and layering of the image. The drive circuit uses a 1/8-scan constant-current drive mode and makes full use of the microcontroller's I/O resources to complete the control. The system supports text and picture display at 256 grayscale
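
    A software model of the grayscale idea is sketched below: each scan slot is divided into sub-frames whose durations are weighted by bit significance (often called bit-angle modulation), so eight sub-frames reproduce 256 grayscale levels. The timing values and row width are illustrative assumptions, not figures from the MSP430 design.

      def bit_plane_schedule(gray_levels, base_slot_us=50):
          """Turn 8-bit grayscale values into per-bit-plane on/off states and durations.

          gray_levels:  0-255 grayscale values, one per LED in a scan row
          base_slot_us: duration of the least-significant bit plane, in microseconds
          Returns (duration_us, on_off_states) pairs; cycling through them within
          each 1/8-scan time slot reproduces 256 brightness levels.
          """
          schedule = []
          for bit in range(8):
              duration = base_slot_us * (1 << bit)           # 50, 100, 200, ... us
              states = [(g >> bit) & 1 for g in gray_levels]
              schedule.append((duration, states))
          return schedule

      # One row of four LEDs at different brightness levels.
      for duration, states in bit_plane_schedule([0, 64, 128, 255]):
          print(f"{duration:5d} us -> {states}")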

  9. Dynamic Visual Displays in Media-Based Instruction.

    Science.gov (United States)

    Park, Ok-choon

    1994-01-01

    Defines instructional roles of dynamic visual displays (DVDs), including animation and interactive video, and provides guidelines for using them in media-based instruction. Topics addressed include theoretical bases; examples of strategic applications of DVDs; six instructional conditions for using DVDs; and considerations for the design and…

  10. CERN students display their work

    CERN Multimedia

    Anaïs Schaeffer

    2011-01-01

    The first poster session by students working on the LHC experiments, organised by the LPCC, was a great success. Showcasing the talents of over a hundred young physicists from all over the world, it was an opportunity for everyone at CERN to check out the wide range of research work being done by the new generation of physicists at CERN.   At 5.30 p.m. on Wednesday 23 March, the first poster session by CERN students took place in Restaurant No.1, where no fewer than 87 posters went on public display. The students were split into 8 groups according to their research field* and all were on hand to answer the questions of an inquisitive audience. TH Department's Michelangelo Mangano, who is head of the LHC Physics Centre at CERN (LPCC) and is responsible for the initiative, confirms that nothing was left to chance, even the choice of date: "We wanted to make the most of the general enthusiasm around the winter conferences and the meeting of the LHC Experiments Committee to present the stud...

  11. Citizenship displayed by disabled people

    Directory of Open Access Journals (Sweden)

    Eliana Prado Carlino

    2010-12-01

    Full Text Available By investigating the processes by which successful teachers become active citizens and by listening to the diversity and richness of their life and formation stories, this work became possible. Its aim is to display some of the utterances of two Down Syndrome individuals and their active-citizenship activities. Their stories were told in the reports of two teachers when describing their personal and professional history, and were considered to be an integral part of it. Thus, some of the utterances and perceptions with which these two individuals elaborate their references, their worldview and their active-citizenship activity are evidenced in this paper. This article is based on the language conceptions of Vygotsky and Bakhtin, who defend the idea that the group and the social mentality are ingrained in the individual. Hence, the history of one person reveals that of many others, since there is a deep link between the individual and the social in the formation of a subjective worldview. As a result, it can be easily seen that the utterances expressed by the participants in this research cannot be considered strictly individual because enunciation is social in nature. Despite the fact that the utterances are those of individuals, they manifest a collective reality. This demonstrates the real advantages and possibilities that disabled people gain from their participation and intervention in society.

  12. Stereoscopic depth of field: why we can easily perceive and distinguish the depth of neighboring objects under binocular condition than monocular

    Science.gov (United States)

    Lee, Kwang-Hoon; Park, Min-Chul

    2016-06-01

    In this paper, we introduce a highly efficient and practical disparity estimation method using hierarchical bilateral filtering for real-time view synthesis. The proposed method is based on hierarchical stereo matching with hardware-efficient bilateral filtering. Hardware-efficient bilateral filtering differs from the exact bilateral filter: the aim is an edge-preserving filter that can be efficiently parallelized in hardware. The proposed hierarchical-bilateral-filtering-based disparity estimation is essentially a coarse-to-fine use of stereo matching with bilateral filtering. It works as follows: first, a hierarchical image pyramid is constructed; the multi-scale algorithm then starts by applying local stereo matching to the downsampled images at the coarsest level of the hierarchy. After the local stereo matching, the estimated disparity map is refined with bilateral filtering and then adaptively upsampled to the next finer level. The upsampled disparity map is used as a prior for the corresponding local stereo matching at that level, refined again, and so on. The method we propose is essentially a combination of hierarchical stereo matching and hardware-efficient bilateral filtering. Visual comparison using real-world stereoscopic video clips shows that the method gives better results than one of the state-of-the-art methods in terms of robustness and computation time.
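
    The coarse-to-fine structure described above can be sketched independently of the hardware-efficient filter. The fragment below uses plain block matching and omits the bilateral refinement step (a real implementation would filter the disparity map at each level), so it illustrates only the pyramid logic under those simplifying assumptions.

      import numpy as np

      def downsample(img):
          """Halve resolution by averaging 2x2 blocks (one image-pyramid level)."""
          h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
          img = img[:h, :w]
          return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

      def local_match(left, right, max_disp, prior=None, radius=2):
          """Per-pixel stereo matching; with a prior disparity map, search near it."""
          h, w = left.shape
          disp = np.zeros((h, w))
          for y in range(h):
              for x in range(w):
                  center = int(prior[y, x]) if prior is not None else 0
                  lo = max(0, center - radius) if prior is not None else 0
                  hi = min(max_disp, center + radius) if prior is not None else max_disp
                  costs = [abs(left[y, x] - right[y, x - d]) if x - d >= 0 else np.inf
                           for d in range(lo, hi + 1)]
                  disp[y, x] = lo + int(np.argmin(costs))
          return disp

      def hierarchical_disparity(left, right, levels=3, max_disp=16):
          """Coarse-to-fine disparity estimation over an image pyramid."""
          pyramid = [(left, right)]
          for _ in range(levels - 1):
              pyramid.append((downsample(pyramid[-1][0]), downsample(pyramid[-1][1])))
          disp = None
          for lvl in reversed(range(levels)):
              l_img, r_img = pyramid[lvl]
              disp = local_match(l_img, r_img, max_disp >> lvl, prior=disp)
              if lvl > 0:  # upsample and rescale the estimate as a prior for the finer level
                  disp = np.kron(disp, np.ones((2, 2))) * 2.0
                  th, tw = pyramid[lvl - 1][0].shape
                  pad_y = max(0, th - disp.shape[0])
                  pad_x = max(0, tw - disp.shape[1])
                  disp = np.pad(disp, ((0, pad_y), (0, pad_x)), mode="edge")[:th, :tw]
          return disp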

  13. Laser-based displays: a review.

    Science.gov (United States)

    Chellappan, Kishore V; Erden, Erdem; Urey, Hakan

    2010-09-01

    After the invention of lasers, in the past 50 years progress made in laser-based display technology has been very promising, with commercial products awaiting release to the mass market. Compact laser systems, such as edge-emitting diodes, vertical-cavity surface-emitting lasers, and optically pumped semiconductor lasers, are suitable candidates for laser-based displays. Laser speckle is an important concern, as it degrades image quality. Typically, one or multiple speckle reduction techniques are employed in laser displays to reduce speckle contrast. Likewise, laser safety issues need to be carefully evaluated in designing laser displays under different usage scenarios. Laser beam shaping using refractive and diffractive components is an integral part of laser displays, and the requirements depend on the source specifications, modulation technique, and the scanning method being employed in the display. A variety of laser-based displays have been reported, and many products such as pico projectors and laser televisions are commercially available already.

  14. Microspheres in Plasma Display Panels

    Science.gov (United States)

    2006-01-01

    Filling small bubbles of molten glass with gases is just as difficult as it sounds, but the technical staff at NASA is not known to shy away from a difficult task. When Microsphere Systems, Inc. (MSI), of Ypsilanti, Michigan, and Imaging Systems Technology, Inc. (IST), of Toledo, Ohio, were trying to push the limits of plasma displays but were having difficulty with the designs, NASA's Glenn Garrett Morgan Commercialization Initiative (GMCI) assembled key personnel at Glenn Research Center and Ohio State University for a brainstorming session to come up with a solution for the companies. They needed a system that could produce hollow, glass micro-sized spheres (microspheres) that could be filled with a variety of gases. But the extremely high temperature required to force the micro-sized glass bubbles to form at the tip of a metal nozzle resulted in severe discoloration of the microspheres. After countless experiments on various glass-metal combinations, they had turned to the GMCI for help. NASA experts in advanced metals, ceramics, and glass concluded that a new design approach was necessary. The team determined that what was needed was a phosphate glass composition that would remain transparent, and they went to work on a solution. Six weeks later, using the design tips from the NASA team, Tim Henderson, president of MSI, had designed a new system in which all surfaces in contact with the molten glass would be ceramic instead of metal. Meanwhile, IST was able to complete a Phase I Small Business Innovation Research (SBIR) grant supported by the National Science Foundation (NSF) and supply a potential customer with samples of the microspheres for evaluation as filler materials for high-performance insulations.

  15. Case study: using a stereoscopic display for mission planning

    Science.gov (United States)

    Kleiber, Michael; Winkelholz, Carsten

    2009-02-01

    This paper reports on the results of a study investigating the benefits of using an autostereoscopic display in the training targeting process of the German Air Force. The study examined how stereoscopic 3D visualizations can help to improve flight path planning and the preparation of a mission in general. An autostereoscopic display was used because it allows the operator to perceive the stereoscopic images without shutter glasses, which facilitates the integration into a workplace with conventional 2D monitors and arbitrary lighting conditions.

  16. MEPR versus EEPR valves in open supermarket refrigerated display cabinets

    Energy Technology Data Exchange (ETDEWEB)

    Tahir, A.; Bansal, P.K. [Auckland Univ, (New Zealand). Dept. of Mechanical Engineering

    2005-02-01

    This paper presents the comparative experimental field performance of mechanical evaporator pressure regulating valves (MEPR) and electronic evaporator pressure regulating valves (EEPR) under the identical operating conditions of supermarket open multi-deck refrigerated display cabinets. The main goal of the supermarket refrigeration system design is to keep the displayed product at the required constant temperature, while minimising the cooling load to increase the overall energy efficiency of the system. Field tests have shown that the electronic evaporator pressure valve has a significant effect on improving the cabinet temperature and reducing the rate of frost formation on the evaporator coils with subsequent improvements in the air curtain strength. (author)

  17. Digital interface for high-resolution displays

    Science.gov (United States)

    Hermann, David J.; Gorenflo, Ronald L.

    1999-08-01

    Commercial display interfaces are currently transitioning from analog to digital. Although this transition is in the very early stages, the military needs to begin planning their own transition to digital. There are many problems with the analog interface in high-resolution display systems that are solved by changing to a digital interface. Also, display system cost can be lower with a digital interface to a high resolution display. Battelle is under contract with DARPA to develop an advanced Display Interface (ADI) to replace the analog RGB interfaces currently used in high definition workstation displays. The goal is to create a standard digital display interface for military applications that is based on emerging commercial standards. Support for military application- specific functionality is addressed, including display test and control. The main challenges to implementing a digital display interface are described, along with approaches to address the problems. Conceptual ADI architectures are described and contrasted. The current and emerging commercial standards for digital display interfaces are reviewed in detail. Finally, the tasks required to complete the ADI effort are outlined and described.

  18. The effects of format in computer-based procedure displays

    Science.gov (United States)

    Desaulniers, David R.; Gillan, Douglas J.; Rudisill, Marianne

    1988-01-01

    Two experiments were conducted to investigate display variables likely to influence the effectiveness of computer-based procedure displays. In experiment 1, procedures were presented in three formats, text, extended-text, and flowchart. Text and extended-text are structured prose formats which differ in the spatial density of presentation. The flowchart format differs from the text format in both syntax and spatial representation. Subjects were required to use the procedures to diagnose a hypothetical system anomaly. The results indicate that performance was most accurate with the flowchart format. In experiment 2, procedure window size was varied (6-line, 12-line, and 24-line) in addition to procedure format. In the six line window condition, experiment 2 replicated the findings of experiment 1. As predicted, completion times for flowchart procedures decreased with increasing window size; however, accuracy of performance decreased substantially. Implications for the design of computer-based procedure displays are discussed.

  19. Laser-driven polyplanar optic display

    Energy Technology Data Exchange (ETDEWEB)

    Veligdan, J.T.; Biscardi, C.; Brewster, C.; DeSanto, L. [Brookhaven National Lab., Upton, NY (United States). Dept. of Advanced Technology; Beiser, L. [Leo Beiser Inc., Flushing, NY (United States)

    1998-01-01

    The Polyplanar Optical Display (POD) is a unique display screen which can be used with any projection source. This display screen is 2 inches thick and has a matte-black face which allows for high contrast images. The prototype being developed is a form, fit and functional replacement display for the B-52 aircraft which uses a monochrome ten-inch display. The new display uses a 200 milliwatt green solid-state laser (532 nm) as its optical source. In order to produce real-time video, the laser light is being modulated by a Digital Light Processing (DLP) chip manufactured by Texas Instruments, Inc. A variable astigmatic focusing system is used to produce a stigmatic image on the viewing face of the POD. In addition to the optical design, the authors discuss the DLP chip, the optomechanical design and viewing angle characteristics.

  20. Spectroradiometric characterization of autostereoscopic 3D displays

    Science.gov (United States)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    Spectroradiometric measurements have been made for the experimental characterization of the RGB channels of autostereoscopic 3D displays, giving results for different measurement angles with respect to the normal direction of the plane of the display. In the study, 2 different models of autostereoscopic 3D displays of different sizes and resolutions were used, making measurements with a spectroradiometer (model PR-670 SpectraScan of PhotoResearch). From the measurements made, goniometric results were recorded for luminance contrast, and the fundamental hypotheses have been evaluated for the characterization of the displays: independence of the RGB channels and their constancy. The results show that the display with the lower angle variability in the contrast-ratio value and constancy of the chromaticity coordinates nevertheless presented the greatest additivity deviations with the measurement angle. For both displays, when the parameters evaluated were taken into account, lower angle variability consistently resulted in the 2D mode than in the 3D mode.
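
    The chromaticity coordinates referred to in this kind of characterization are derived from the measured tristimulus values in the standard CIE 1931 way; the short reminder below uses made-up numbers, not values measured in the study.

      def chromaticity(X, Y, Z):
          """CIE 1931 chromaticity coordinates (x, y) from tristimulus values.

          Y carries the luminance (cd/m^2 when the measurement is absolute).
          """
          total = X + Y + Z
          if total == 0:
              raise ValueError("tristimulus values sum to zero")
          return X / total, Y / total

      # Illustrative (not measured) values for a display white point.
      X, Y, Z = 95.0, 100.0, 108.0
      x, y = chromaticity(X, Y, Z)
      print(f"x = {x:.4f}, y = {y:.4f}, luminance Y = {Y} cd/m^2")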

  1. Evaluating Ambient Displays in the Wild

    DEFF Research Database (Denmark)

    Messeter, Jörn; Molenaar, Daryn

    A prominent issue for evaluating ambient displays has been the conflict between the relative intrusiveness of evaluation methods and the intention to keep the display at the periphery of the user's attention. There is a general lack of research discussing the difficulties of evaluating ambient displays in the wild, and in particular the social aspects of use have received little attention. This paper presents a case study of an ambient light display designed for a public setting. Based on results from a non-intrusive in situ evaluation, we argue that viewing ambient displays as features of a broader social setting may aid our understanding of issues regarding the evaluation of ambient displays in the wild.

  2. Parallel large data visualization with display walls

    Science.gov (United States)

    Scheidegger, Luiz; Vo, Huy T.; Krüger, Jens; Silva, Cláudio T.; Comba, João L. D.

    2012-01-01

    While there exist popular software tools that leverage the power of arrays of tiled high resolution displays, they usually require either the use of a particular API or significant programming effort to be properly configured. We present PVW (Parallel Visualization using display Walls), a framework that uses display walls for scientific visualization, requiring minimum labor in setup, programming and configuration. PVW works as a plug-in to pipeline-based visualization software, and allows users to migrate existing visualizations designed for a single-workstation, single-display setup to a large tiled display running on a distributed machine. Our framework is also extensible, allowing different APIs and algorithms to be made display wall-aware with minimum effort.

  3. Conceptual Design of Industrial Process Displays

    DEFF Research Database (Denmark)

    Pedersen, C.R.; Lind, Morten

    1999-01-01

    by a simple example from a plant with batch processes. Later the method is applied to develop a supervisory display for a condenser system in a nuclear power plant. The differences between the continuous plant domain of power production and the batch processes from the example are analysed, and broad categories of display types are proposed. The problems involved in the specification and invention of a supervisory display are analysed and conclusions are drawn from these problems. It is concluded that the proposed design method provides a framework for the progress of the display design and is useful in pinpointing the actual problems. The method was useful in reducing the number of existing displays that could fulfil the requirements of the supervision task. At the same time, the method provided a framework for dealing with the problems involved in inventing new displays based on structured analysis. However...

  4. Three-dimensional Imaging, Visualization, and Display

    CERN Document Server

    Javidi, Bahram; Son, Jung-Young

    2009-01-01

    Three-Dimensional Imaging, Visualization, and Display describes recent developments, as well as the prospects and challenges facing 3D imaging, visualization, and display systems and devices. With the rapid advances in electronics, hardware, and software, 3D imaging techniques can now be implemented with commercially available components and can be used for many applications. This volume discusses the state-of-the-art in 3D display and visualization technologies, including binocular, multi-view, holographic, and image reproduction and capture techniques. It also covers 3D optical systems, 3D display instruments, 3D imaging applications, and details several attractive methods for producing 3D moving pictures. This book integrates the background material with new advances and applications in the field, and the available online supplement will include full color videos of 3D display systems. Three-Dimensional Imaging, Visualization, and Display is suitable for electrical engineers, computer scientists, optical e...

  5. Laser Based 3D Volumetric Display System

    Science.gov (United States)

    1993-03-01

    Reference excerpt: Literature, Costa Mesa, CA, July 1983. 3. "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams, Felix Garcia, Jr., Texas... Report documentation fields: Laser Based 3D Volumetric Display System; PR: CD13; authors P. Soltan, J. Trias, W. Robinson, W. Dahlke... laser-generated 3D volumetric images on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye

  6. Matrix-addressable electrochromic display cell

    Science.gov (United States)

    Beni, G.; Schiavone, L. M.

    1981-04-01

    We report an electrochromic display cell with intrinsic matrix addressability. The cell, based on a sputtered iridium oxide film (SIROF) and a tantalum-oxide hysteretic counterelectrode, has electrochromic parameters (i.e., response times, operating voltages, and contrast) similar to those of other SIROF display devices, but in addition, has short-circuit memory and voltage threshold. Memory and threshold are sufficiently large to allow, in principle, multiplexing of electrochromic display panels of large-screen TV pixel size.

  7. Helmet-Mounted Display Design Guide

    Science.gov (United States)

    1997-11-03

    Excerpt from the guide's HyperCard stack script, which on opening the stack builds a "CSHMD" menu whose items include Main Menu, References, Definitions, Display Criteria, Display Formats, and Display Modes, and which sets up the resource path for the stack.

  8. Future Directions for Astronomical Image Display

    Science.gov (United States)

    Mandel, Eric

    2000-03-01

    In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.

  9. Analysis of temporal stability of autostereoscopic 3D displays

    Science.gov (United States)

    Rubiño, Manuel; Salas, Carlos; Pozo, Antonio M.; Castro, J. J.; Pérez-Ocón, Francisco

    2013-11-01

    An analysis has been made of the stability of the images generated by electronic autostereoscopic 3D displays, studying the time course of the photometric and colorimetric parameters. The measurements were made following the procedure recommended in the European guideline EN 61747-6 for the characterization of electronic liquid-crystal displays (LCD). The study uses 3 different models of autostereoscopic 3D displays of different sizes and numbers of pixels, taking the measurements with a spectroradiometer (PhotoResearch PR-670 SpectraScan). For each of the displays, the time course of the tristimulus values and the chromaticity coordinates in the CIE 1931 XYZ system is shown, and the times required to reach stable values of these parameters are presented. To analyse how the procedure recommended in guideline EN 61747-6 for 2D displays influences the results, and to adapt the procedure to the characterization of 3D displays, the experimental conditions of the standard procedure were varied: the stability analysis was carried out in the two ocular channels (RE and LE) of the 3D mode and the results were compared with those corresponding to 2D. The results of our study show that the stabilization time of an autostereoscopic 3D display with parallax-barrier technology depends on the tristimulus value analysed (X, Y, Z) as well as on the presentation mode (2D, 3D); furthermore, when the 3D mode is used, the result also depends on the ocular channel evaluated (RE, LE).

  10. Refreshable Braille Displays Using EAP Actuators

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2010-01-01

    Refreshable Braille can help visually impaired persons benefit from the growing advances in computer technology. The development of such displays in a full screen form is a great challenge due to the need to pack many actuators in small area without interferences. In recent years, various displays using actuators such as piezoelectric stacks have become available in commercial form but most of them are limited to one line Braille code. Researchers in the field of electroactive polymers (EAP) investigated methods of using these materials to form full screen displays. This manuscript reviews the state of the art of producing refreshable Braille displays using EAP-based actuators.

  11. Refreshable Braille displays using EAP actuators

    Science.gov (United States)

    Bar-Cohen, Yoseph

    2010-04-01

    Refreshable Braille can help visually impaired persons benefit from the growing advances in computer technology. The development of such displays in a full screen form is a great challenge due to the need to pack many actuators in small area without interferences. In recent years, various displays using actuators such as piezoelectric stacks have become available in commercial form but most of them are limited to one line Braille code. Researchers in the field of electroactive polymers (EAP) investigated methods of using these materials to form full screen displays. This manuscript reviews the state of the art of producing refreshable Braille displays using EAP-based actuators.

  12. PENGARUH DISPLAY PRODUK PADA KEPUTUSAN PEMBELIAN KONSUMEN

    Directory of Open Access Journals (Sweden)

    Ina Melati

    2012-11-01

    Full Text Available Most retail outlets today use product display as one of their best marketing strategies, and the reason is easy to understand: consumers are readily tempted by the attractive product displays that retail outlets put up. Good retail outlets try their best to design very good product displays so that they can attract more consumers and make them not think twice about visiting the store and purchasing many items. Clearly, an attractive product display is able to influence a consumer's buying decision.

  13. New ultraportable display technology and applications

    Science.gov (United States)

    Alvelda, Phillip; Lewis, Nancy D.

    1998-08-01

    MicroDisplay devices are based on a combination of technologies rooted in the extreme integration capability of conventionally fabricated CMOS active-matrix liquid crystal display substrates. Customized diffraction grating and optical distortion correction technology for lens-system compensation allow the elimination of many lenses and systems-level components. The MicroDisplay Corporation's miniature integrated information display technology is rapidly leading to many new defense and commercial applications. There are no moving parts in MicroDisplay substrates, and the fabrication of the color generating gratings, already part of the CMOS circuit fabrication process, is effectively cost and manufacturing process-free. The entire suite of the MicroDisplay Corporation's technologies was devised to create a line of application-specific integrated circuit single-chip display systems with integrated computing, memory, and communication circuitry. Next-generation portable communication, computer, and consumer electronic devices such as truly portable monitor and TV projectors, eyeglass and head mounted displays, pagers and Personal Communication Services handsets, and wristwatch-mounted video phones are among the many target commercial markets for MicroDisplay technology. Defense applications range from Maintenance and Repair support, to night-vision systems, to portable projectors for mobile command and control centers.

  14. Testing Instrument for Flight-Simulator Displays

    Science.gov (United States)

    Haines, Richard F.

    1987-01-01

    Displays for flight-training simulators rapidly aligned with aid of integrated optical instrument. Calibrations and tests such as aligning boresight of display with respect to user's eyes, checking and adjusting display horizon, checking image sharpness, measuring illuminance of displayed scenes, and measuring distance of optical focus of scene performed with single unit. New instrument combines all measurement devices in single, compact, integrated unit. Requires just one initial setup. Employs laser and produces narrow, collimated beam for greater measurement accuracy. Uses only one moving part, double right prism, to position laser beam.

  15. Microencapsulated Electrophoretic Films for Electronic Paper Displays

    Science.gov (United States)

    Amundson, Karl

    2003-03-01

    Despite the dominance of liquid crystal displays, they do not perform some functions very well. While backlit liquid crystal displays can offer excellent color performance, they wash out in bright lighting and suffer from high power consumption. Reflective liquid crystal displays have limited brightness, making these devices challenging to read for long periods of time. Flexible liquid crystal displays are difficult to manufacture and keep stable. All of these attributes (long battery lifetime, bright reflective appearance, compatibility with flexible substrates) are traits that would be found in an ideal electronic paper display - an updateable substitute for paper that could be employed in electronic books, newspapers, and other applications. I will discuss technologies that are being developed for electronic-paper-like displays, especially particle-based technologies. A microencapsulated electrophoretic display technology is being developed at the E Ink corporation. This display film offers high brightness and an ink-on-paper appearance, compatibility with flexible substrates, and image stability that can lead to very low power consumption. I will present some of the physical and chemical challenges associated with making display films with high performance.

  16. Framework for effective use of multiple displays

    Science.gov (United States)

    Liu, Qiong; Kimber, Don; Zhao, Frank; Huang, Jeffrey

    2005-10-01

    Meeting environments, such as conference rooms, executive briefing centers, and exhibition spaces, are now commonly equipped with multiple displays, and will become increasingly display-rich in the future. Existing authoring/presentation tools such as PowerPoint, however, provide little support for effective utilization of multiple displays. Even using advanced multi-display enabled multimedia presentation tools, the task of assigning material to displays is tedious and distracts presenters from focusing on content. This paper describes a framework for automatically assigning presentation material to displays, based on a model of the quality of views of audience members. The framework is based on a model of visual fidelity which takes into account presentation content, audience members' locations, the limited resolution of human eyes, and display location, orientation, size, resolution, and frame rate. The model can be used to determine presentation material placement based on average or worst case audience member view quality, and to warn about material that would be illegible. By integrating this framework with a previous system for multi-display presentation [PreAuthor, others], we created a tool that accepts PowerPoint and/or other media input files, and automatically generates a layout of material onto displays for each state of the presentation. The tool also provides an interface allowing the presenter to modify the automatically generated layout before or during the actual presentation. This paper discusses the framework, possible application scenarios, examples of the system behavior, and our experience with system use.
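
    One concrete ingredient of such a visual-fidelity model is a legibility check: does the smallest text on a given display subtend enough visual angle for a viewer at a given distance? The sketch below uses the common rule of thumb that normal acuity resolves roughly one arcminute; the threshold and the flat-on viewing geometry are simplifications, not the framework's actual model.

      import math

      ARCMIN_PER_DEGREE = 60.0

      def text_visual_angle_arcmin(char_height_m, viewing_distance_m):
          """Visual angle subtended by a character of the given physical height."""
          angle_rad = 2.0 * math.atan(char_height_m / (2.0 * viewing_distance_m))
          return math.degrees(angle_rad) * ARCMIN_PER_DEGREE

      def legible(char_height_m, viewing_distance_m, min_arcmin_per_char=10.0):
          """Rough legibility test: a character needs several arcminutes of height
          (about five resolvable strokes at ~1 arcmin each, plus margin)."""
          return text_visual_angle_arcmin(char_height_m, viewing_distance_m) >= min_arcmin_per_char

      # 12 mm tall text on a wall display viewed from 4 m and from 9 m.
      for d in (4.0, 9.0):
          angle = text_visual_angle_arcmin(0.012, d)
          print(f"{d} m: {angle:.1f} arcmin ->", "legible" if legible(0.012, d) else "too small")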

  17. Pengaruh Display Produk pada Keputusan Pembelian Konsumen

    Directory of Open Access Journals (Sweden)

    Ina Melati

    2012-10-01

    Full Text Available Most retail outlets today use product display as one of their best marketing strategies, and the reason is easy to understand: consumers are readily tempted by the attractive product displays that retail outlets put up. Good retail outlets try their best to design very good product displays so that they can attract more consumers and make them not think twice about visiting the store and purchasing many items. Clearly, an attractive product display is able to influence a consumer's buying decision.

  18. Display of nuclear medicine imaging studies

    CERN Document Server

    Singh, B; Samuel, A M

    2002-01-01

    Nuclear medicine imaging studies involve evaluation of a large amount of image data. Digital signal processing techniques have introduced processing algorithms that increase the information content of the display. Nuclear medicine imaging studies require interactive selection of suitable form of display and pre-display processing. Static imaging study requires pre-display processing to detect focal defects. Point operations (histogram modification) along with zoom and capability to display more than one image in one screen is essential. This album mode of display is also applicable to dynamic, MUGA and SPECT data. Isometric display or 3-D graph of the image data is helpful in some cases e.g. point spread function, flood field data. Cine display is used on a sequence of images e.g. dynamic, MUGA and SPECT imaging studies -to assess the spatial movement of tracer with time. Following methods are used at the investigator's discretion for inspection of the 3-D object. 1) Display of orthogonal projections, 2) Disp...

  19. Volumetric Three-Dimensional Display Systems

    Science.gov (United States)

    Blundell, Barry G.; Schwarz, Adam J.

    2000-03-01

    A comprehensive study of approaches to three-dimensional visualization by volumetric display systems This groundbreaking volume provides an unbiased and in-depth discussion on a broad range of volumetric three-dimensional display systems. It examines the history, development, design, and future of these displays, and considers their potential for application to key areas in which visualization plays a major role. Drawing substantially on material that was previously unpublished or available only in patent form, the authors establish the first comprehensive technical and mathematical formalization of the field, and examine a number of different volumetric architectures. System level design strategies are presented, from which proposals for the next generation of high-definition predictable volumetric systems are developed. To ensure that researchers will benefit from work already completed, they provide: * Descriptions of several recent volumetric display systems prepared from material supplied by the teams that created them * An abstract volumetric display system design paradigm * An historical summary of 90 years of development in volumetric display system technology * An assessment of the strengths and weaknesses of many of the systems proposed to date * A unified presentation of the underlying principles of volumetric display systems * A comprehensive bibliography Beautifully supplemented with 17 color plates that illustrate volumetric images and prototype displays, Volumetric Three-Dimensional Display Systems is an indispensable resource for professionals in imaging systems development, scientific visualization, medical imaging, computer graphics, aerospace, military planning, and CAD/CAE.

  20. Sexual display and mate choice in an energetically costly environment.

    Science.gov (United States)

    Head, Megan L; Wong, Bob B M; Brooks, Robert

    2010-12-09

    Sexual displays and mate choice often take place under the same set of environmental conditions and, as a consequence, may be exposed to the same set of environmental constraints. Surprisingly, however, very few studies consider the effects of environmental costs on sexual displays and mate choice simultaneously. We conducted an experiment, manipulating water flow in large flume tanks, to examine how an energetically costly environment might affect the sexual display and mate choice behavior of male and female guppies, Poecilia reticulata. We found that male guppies performed fewer sexual displays and became less choosy, with respect to female size, in the presence of a water current compared to those tested in still water. In contrast to males, female responsive to male displays did not differ between the water current treatments and females exhibited no mate preferences with respect to male size or coloration in either treatment. The results of our study underscore the importance of considering the simultaneous effects of environmental costs on the sexual behaviors of both sexes.

  1. A modular display system for insect behavioral neuroscience.

    Science.gov (United States)

    Reiser, Michael B; Dickinson, Michael H

    2008-01-30

    Flying insects exhibit stunning behavioral repertoires that are largely mediated by the visual control of flight. For this reason, presenting a controlled visual environment to tethered insects has been and continues to be a powerful tool for studying the sensory control of complex behaviors. To create an easily controlled, scalable, and customizable visual stimulus, we have designed a modular system, based on panels composed of an 8 x 8 array of individual LEDs, that may be connected together to 'tile' an experimental environment with controllable displays. The panels have been designed to be extremely bright, with the added flexibility of individual-pixel brightness control, allowing experimentation over a broad range of behaviorally relevant conditions. Patterns to be displayed may be designed using custom software, downloaded to a controller board, and displayed on the individually addressed panels via a rapid communication interface. The panels are controlled by a microprocessor-based display controller which, for most experiments, will not require a computer in the loop, greatly reducing the experimental infrastructure. This technology allows an experimenter to build and program a visual arena with a customized geometry in a matter of hours. To demonstrate the utility of this system, we present results from experiments with tethered Drosophila melanogaster: (1) in a cylindrical arena composed of 44 panels, used to test the contrast dependence of object orientation behavior, and (2) above a 30-panel floor display, used to examine the effects of ground motion on orientation during flight.

  2. Modeling and display of 3D human body based on monocular vision measurement%基于单目视觉测量的人体建模与显示

    Institute of Scientific and Technical Information of China (English)

    盛光有; 姜寿山; 张欣; 崔芳芳

    2009-01-01

    以一种基于单目视觉测量原理的三维人体扫描装置获得的人体数据为来源,运用三角面片法构建人体表面,并把人体模型保存为一种标准的模型格式文件--OBJ文件.在Visual C++的编程环境中采用OpenGL(Open Graphics Library)作为图形接口,编程显示了人体模型.

  3. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    Science.gov (United States)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability for pilots to estimate traffic location in an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  4. Using the human eye to characterize displays

    Science.gov (United States)

    Gille, Jennifer; Larimer, James O.

    2001-06-01

    Monitor characterization has taken on new importance for non-professional users, who are not usually equipped to make photometric measurements. Our purpose was to examine some of the visual judgements used in characterization schemes that have been proposed for web users. We studied adjusting brightness to set the black level, banding effects du to digitization, and gamma estimation in the light an din the dark, and a color-matching tasks in the light, on a desktop CRT and a laptop LCD. Observers demonstrated the sensitivity of the visual system for comparative judgements in black- level adjustment, banding visibility, and gamma estimation. The results of the color-matching task were ambiguous. In the brightness adjustment task, the action of the adjustment was not as presumed; however, perceptual judgements were as expected under the actual conditions. Whenthe gamma estimates of observers were compared to photometric measurements, pro9blems with the definition of gamma were identified. Information about absolute light levels that would be important for characterizing a display, given the shortcomings of gamma in measuring apparent contrast, are not measurable by eye alone. The LCD was not studied as extensively as the CRT because of viewing-angle problems, and its transfer function did not follow a power law, rendering gamma estimation meaningless.

  5. An Evaluation of Detect and Avoid (DAA) Displays for Unmanned Aircraft Systems: The Effect of Information Level and Display Location on Pilot Performance

    Science.gov (United States)

    Fern, Lisa; Rorie, R. Conrad; Pack, Jessica S.; Shively, R. Jay; Draper, Mark H.

    2015-01-01

    A consortium of government, industry and academia is currently working to establish minimum operational performance standards for Detect and Avoid (DAA) and Control and Communications (C2) systems in order to enable broader integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS). One subset of these performance standards will need to address the DAA display requirements that support an acceptable level of pilot performance. From a pilot's perspective, the DAA task is the maintenance of self separation and collision avoidance from other aircraft, utilizing the available information and controls within the Ground Control Station (GCS), including the DAA display. The pilot-in-the-loop DAA task requires the pilot to carry out three major functions: 1) detect a potential threat, 2) determine an appropriate resolution maneuver, and 3) execute that resolution maneuver via the GCS control and navigation interface(s). The purpose of the present study was to examine two main questions with respect to DAA display considerations that could impact pilots' ability to maintain well clear from other aircraft. First, what is the effect of a minimum (or basic) information display compared to an advanced information display on pilot performance? Second, what is the effect of display location on UAS pilot performance? Two levels of information level (basic, advanced) were compared across two levels of display location (standalone, integrated), for a total of four displays. The authors propose an eight-stage pilot-DAA interaction timeline from which several pilot response time metrics can be extracted. These metrics were compared across the four display conditions. The results indicate that the advanced displays had faster overall response times compared to the basic displays, however, there were no significant differences between the standalone and integrated displays. Implications of the findings on understanding pilot performance on the DAA task, the

  6. SHAPE AND ALBEDO FROM SHADING (SAfS FOR PIXEL-LEVEL DEM GENERATION FROM MONOCULAR IMAGES CONSTRAINED BY LOW-RESOLUTION DEM

    Directory of Open Access Journals (Sweden)

    B. Wu

    2016-06-01

    Full Text Available Lunar topographic information, e.g., lunar DEM (Digital Elevation Model, is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated from these methods are usually achieved by various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading, extensively studied in the field of Computer Vision has been introduced to pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo were to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading problem and it will be under-determined without additional information. Previous works show strong statistical regularities in albedo of natural objects, and this is even more logically valid in the case of lunar surface due to its lower surface albedo complexity than the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with known light source, at the same time we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from Lunar Reconnaissance

  7. Design of the Surgical Navigation Based on Monocular Vision%单目视觉手术导航的系统设计

    Institute of Scientific and Technical Information of China (English)

    刘大鹏; 张巍; 徐子昂

    2016-01-01

    Objective: Existing orthopedic surgical navigation system makes surgery accurate and intraoperative X-ray exposure reduce to the traditional surgery, but the apparatus body is large and operation complicate, difficult to effectively shorten the operation time. This paper introduces a monocular vision navigation system to solve this problem. Methods: Monocular vision navigation using visible light image processing system, and set the overall hardware platform based on validated algorithms and designs used for knee replacement surgery procedures. Result & Conclusion: Relative to the previous method of non-contact dimensional localization, our system can keep the accuracy while reducing the hardware volume and simplifying the navigation process, also has features such as iterative development, low cost, particularly suitable for medium and small orthopaedics surgery.%目的:现有的骨科手术导航系统在提高手术精度和减少术中X线暴露方面具有传统手术无法比拟的优势,但设备体较大,操作繁琐,难以有效缩短手术时间。因此,介绍一种利用可见光的单目视觉导航系统解决此问题。方法:采用可见光的单目视觉作为手术导航的图像处理系统,并在此基础上设定整体硬件平台,验证相关算法,并设计了针对膝关节置换手术的使用操作流程。结果及结论:相对以往的非接触式立体定位方法,本系统在保证精度的同时减小设备体积,简化导航流程,兼具可重复开发、成本低廉等特性,适用于中小型骨科手术。

  8. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    Science.gov (United States)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated from these methods are usually achieved by various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision has been introduced to pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo were to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading) problem and it will be under-determined without additional information. Previous works show strong statistical regularities in albedo of natural objects, and this is even more logically valid in the case of lunar surface due to its lower surface albedo complexity than the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with known light source, at the same time we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO

  9. Display Developer for Firing Room Applications

    Science.gov (United States)

    Bowman, Elizabeth A.

    2013-01-01

    The firing room at Kennedy Space Center (KSC) is responsible for all NASA human spaceflight launch operations, therefore it is vital that all displays within the firing room be properly tested, up-to-date, and user-friendly during a launch. The Ground Main Propulsion System (GMPS) requires a number of remote displays for Vehicle Integration and Launch (VIL) Operations at KSC. My project is to develop remote displays for the GMPS using the Display Services and Framework (DSF) editor. These remote displays will be based on model images provided by GMPS through PowerPoint. Using the DSF editor, the PowerPoint images can be recreated with active buttons associated with the correct Compact Unique Identifiers (CUIs). These displays will be documented in the Software Requirements and Design Specifications (SRDS) at the 90% GMPS Design Review. In the future, these remote displays will be available for other developers to improve, edit, or add on to so that the display may be incorporated into the firing room to be used for launches.

  10. Interruption and Pausing of Public Display Games

    DEFF Research Database (Denmark)

    Feuchtner, Tiare; Walter, Robert; Müller, Jörg

    2016-01-01

    We present a quantitative and qualitative analysis of interruptions of interaction with a public display game, and explore the use of a manual pause mode in this scenario. In previous public display installations we observed users frequently interrupting their interaction. To explore ways of supp...

  11. Additive and subtractive transparent depth displays

    NARCIS (Netherlands)

    Kooi, F.L.; Toet, A.

    2003-01-01

    Image fusion is the generally preferred method to combine two or more images for visual display on a single screen. We demonstrate that perceptual image separation may be preferable over perceptual image fusion for the combined display of enhanced and synthetic imagery. In this context image separat

  12. Assessment of OLED displays for vision research.

    Science.gov (United States)

    Cooper, Emily A; Jiang, Haomiao; Vildavski, Vladimir; Farrell, Joyce E; Norcia, Anthony M

    2013-10-23

    Vision researchers rely on visual display technology for the presentation of stimuli to human and nonhuman observers. Verifying that the desired and displayed visual patterns match along dimensions such as luminance, spectrum, and spatial and temporal frequency is an essential part of developing controlled experiments. With cathode-ray tubes (CRTs) becoming virtually unavailable on the commercial market, it is useful to determine the characteristics of newly available displays based on organic light emitting diode (OLED) panels to determine how well they may serve to produce visual stimuli. This report describes a series of measurements summarizing the properties of images displayed on two commercially available OLED displays: the Sony Trimaster EL BVM-F250 and PVM-2541. The results show that the OLED displays have large contrast ratios, wide color gamuts, and precise, well-behaved temporal responses. Correct adjustment of the settings on both models produced luminance nonlinearities that were well predicted by a power function ("gamma correction"). Both displays have adjustable pixel independence and can be set to have little to no spatial pixel interactions. OLED displays appear to be a suitable, or even preferable, option for many vision research applications.

  13. Teacher Portfolios: Displaying the Art of Teaching

    Science.gov (United States)

    Reese, Susan

    2004-01-01

    A portfolio can convey a teacher's beliefs, knowledge and skills. An artist uses a portfolio to display artistic talent, and a teacher can use his or her portfolio to display teaching talent and teaching style. A teacher's portfolio may be used to obtain new employment, to document teaching accomplishments in order to receive a promotion or tenure…

  14. Methods for Selecting Phage Display Antibody Libraries.

    Science.gov (United States)

    Jara-Acevedo, Ricardo; Diez, Paula; Gonzalez-Gonzalez, Maria; Degano, Rosa Maria; Ibarrola, Nieves; Gongora, Rafael; Orfao, Alberto; Fuentes, Manuel

    2016-01-01

    The selection process aims sequential enrichment of phage antibody display library in clones that recognize the target of interest or antigen as the library undergoes successive rounds of selection. In this review, selection methods most commonly used for phage display antibody libraries have been comprehensively described.

  15. Interruption and Pausing of Public Display Games

    DEFF Research Database (Denmark)

    Feuchtner, Tiare; Walter, Robert; Müller, Jörg

    We present a quantitative and qualitative analysis of interruptions of interaction with a public display game, and explore the use of a manual pause mode in this scenario. In previous public display installations we observed users frequently interrupting their interaction. To explore ways...... of supporting such behavior, we implemented a gesture controlled multiuser game with four pausing techniques. We evaluated them in a field study analyzing 704 users and found that our pausing techniques were eagerly explored, but rarely used with the intention to pause the game. Our study shows...... that interactions with public displays are considerably intermissive, and that users mostly interrupt interaction to socialize and mainly approach public displays in groups. We conclude that, as a typical characteristic of public display interaction, interruptions deserve consideration. However, manual pause modes...

  16. Three-dimensional hologram display system

    Science.gov (United States)

    Mintz, Frederick (Inventor); Chao, Tien-Hsin (Inventor); Bryant, Nevin (Inventor); Tsou, Peter (Inventor)

    2009-01-01

    The present invention relates to a three-dimensional (3D) hologram display system. The 3D hologram display system includes a projector device for projecting an image upon a display medium to form a 3D hologram. The 3D hologram is formed such that a viewer can view the holographic image from multiple angles up to 360 degrees. Multiple display media are described, namely a spinning diffusive screen, a circular diffuser screen, and an aerogel. The spinning diffusive screen utilizes spatial light modulators to control the image such that the 3D image is displayed on the rotating screen in a time-multiplexing manner. The circular diffuser screen includes multiple, simultaneously-operated projectors to project the image onto the circular diffuser screen from a plurality of locations, thereby forming the 3D image. The aerogel can use the projection device described as applicable to either the spinning diffusive screen or the circular diffuser screen.

  17. An interactive multiview 3D display system

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    The progresses in 3D display systems and user interaction technologies will help more effective 3D visualization of 3D information. They yield a realistic representation of 3D objects and simplifies our understanding to the complexity of 3D objects and spatial relationship among them. In this paper, we describe an autostereoscopic multiview 3D display system with capability of real-time user interaction. Design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype is built and tested based upon multi-projectors and horizontal optical anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  18. Real time speech formant analyzer and display

    Energy Technology Data Exchange (ETDEWEB)

    Holland, George E. (Ames, IA); Struve, Walter S. (Ames, IA); Homer, John F. (Ames, IA)

    1987-01-01

    A speech analyzer for interpretation of sound includes a sound input which converts the sound into a signal representing the sound. The signal is passed through a plurality of frequency pass filters to derive a plurality of frequency formants. These formants are converted to voltage signals by frequency-to-voltage converters and then are prepared for visual display in continuous real time. Parameters from the inputted sound are also derived and displayed. The display may then be interpreted by the user. The preferred embodiment includes a microprocessor which is interfaced with a television set for displaying of the sound formants. The microprocessor software enables the sound analyzer to present a variety of display modes for interpretive and therapeutic used by the user.

  19. Extraction and Analysis of Display Data

    Science.gov (United States)

    Land, Chris; Moye, Kathryn

    2008-01-01

    The Display Audit Suite is an integrated package of software tools that partly automates the detection of Portable Computer System (PCS) Display errors. [PCS is a lap top computer used onboard the International Space Station (ISS).] The need for automation stems from the large quantity of PCS displays (6,000+, with 1,000,000+ lines of command and telemetry data). The Display Audit Suite includes data-extraction tools, automatic error detection tools, and database tools for generating analysis spread sheets. These spread sheets allow engineers to more easily identify many different kinds of possible errors. The Suite supports over 40 independent analyses, 16 NASA Tech Briefs, November 2008 and complements formal testing by being comprehensive (all displays can be checked) and by revealing errors that are difficult to detect via test. In addition, the Suite can be run early in the development cycle to find and correct errors in advance of testing.

  20. 2D/3D switchable displays

    Science.gov (United States)

    Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.

    2006-02-01

    A prerequisite for a wide market acceptance of 3D displays is the ability to switch between 3D and full resolution 2D. In this paper we present a robust and cost effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel, equipped with switchable LC-filled lenticular lenses. We will discuss 3D image quality, with the focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in the case slanting the lenticulars and optimizing the lens design are not sufficient. We will discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics and the residual lens effect.

  1. Hewlett-Packard's Approaches to Full Color Reflective Displays

    Science.gov (United States)

    Gibson, Gary

    2012-02-01

    Reflective displays are desirable in applications requiring low power or daylight readability. However, commercial reflective displays are currently either monochrome or capable of only dim color gamuts. Low cost, high-quality color technology would be rapidly adopted in existing reflective display markets and would enable new solutions in areas such as retail pricing and outdoor digital signage. Technical breakthroughs are required to enable bright color gamuts at reasonable cost. Pixel architectures that rely on pure reflection from a single layer of side-by-side primary-color sub-pixels use only a fraction of the display area to reflect incident light of a given color and are, therefore, unacceptably dark. Reflective devices employing stacked color primaries offer the possibility of a somewhat brighter color gamut but can be more complex to manufacture. In this talk, we describe HP's successes in addressing these fundamental challenges and creating both high performance stacked-primary reflective color displays as well as inexpensive single layer prototypes that provide good color. Our stacked displays utilize a combination of careful light management techniques, proprietary high-contrast electro-optic shutters, and highly transparent active-matrix TFT arrays based on transparent metal oxides. They also offer the possibility of relatively low cost manufacturing through roll-to-roll processing on plastic webs. To create even lower cost color displays with acceptable brightness, we have developed means for utilizing photoluminescence to make more efficient use of ambient light in a single layer device. Existing reflective displays create a desired color by reflecting a portion of the incident spectrum while absorbing undesired wavelengths. We have developed methods for converting the otherwise-wasted absorbed light to desired wavelengths via tailored photoluminescent composites. Here we describe a single active layer prototype display that utilizes these materials

  2. The Effects of a Point-of-Purchase Display on Relative Sales: An In-Store Experimental Evaluation

    Science.gov (United States)

    Sigurdsson, Valdimar; Engilbertsson, Halldor; Foxall, Gordon

    2010-01-01

    An in-store experiment was performed to investigate the effects of a point-of-purchase display on unit sales of dishwashing liquid. The experimental conditions consisted of periodically placing two copies of the same display in convenience stores and supermarkets. The results were unanticipated; point-of-purchase displays did not change relative…

  3. The Effects of a Point-of-Purchase Display on Relative Sales: An In-Store Experimental Evaluation

    Science.gov (United States)

    Sigurdsson, Valdimar; Engilbertsson, Halldor; Foxall, Gordon

    2010-01-01

    An in-store experiment was performed to investigate the effects of a point-of-purchase display on unit sales of dishwashing liquid. The experimental conditions consisted of periodically placing two copies of the same display in convenience stores and supermarkets. The results were unanticipated; point-of-purchase displays did not change relative…

  4. Optimal screening of surface-displayed polypeptide libraries.

    Science.gov (United States)

    Boder, E T; Wittrup, K D

    1998-01-01

    Cell surface display of polypeptide libraries combined with flow cytometric cell sorting presents remarkable potential for enhancement of protein-ligand recognition properties. To maximize the utility of this approach, screening and purification conditions must be optimized to take full advantage of the quantitative feature of this technique. In particular, discrimination of improved library mutants from an excess of wild-type polypeptides is dependent upon an effective screening methodology. Fluorescence discrimination profiles for improved library mutants were derived from a mathematical model of expected cell fluorescence intensities for polypeptide libraries screened with fluorescent ligand. Profiles for surface-displayed libraries under equilibrium or kinetic screening conditions demonstrate distinct discrimination optima from which optimal equilibrium and kinetic screening parameters were derived. In addition, a statistical model of low cytometrically analyzed cell populations indicates the importance of low-stringency sorting followed by amplification through regrowth and resorting at increased stringency. This analysis further yields quantitative recommendations for cell-sorting stringency.

  5. Comparison of Pilots' Situational Awareness While Monitoring Autoland Approaches Using Conventional and Advanced Flight Display Formats

    Science.gov (United States)

    Kramer, Lynda J.; Busquets, Anthony M.

    2000-01-01

    A simulation experiment was performed to assess situation awareness (SA) and workload of pilots while monitoring simulated autoland operations in Instrument Meteorological Conditions with three advanced display concepts: two enhanced electronic flight information system (EFIS)-type display concepts and one totally synthetic, integrated pictorial display concept. Each concept incorporated sensor-derived wireframe runway and iconic depictions of sensor-detected traffic in different locations on the display media. Various scenarios, involving conflicting traffic situation assessments, main display failures, and navigation/autopilot system errors, were used to assess the pilots' SA and workload during autoland approaches with the display concepts. From the results, for each scenario, the integrated pictorial display concept provided the pilots with statistically equivalent or substantially improved SA over the other display concepts. In addition to increased SA, subjective rankings indicated that the pictorial concept offered reductions in overall pilot workload (in both mean ranking and spread) over the two enhanced EFIS-type display concepts. Out of the display concepts flown, the pilots ranked the pictorial concept as the display that was easiest to use to maintain situational awareness, to monitor an autoland approach, to interpret information from the runway and obstacle detecting sensor systems, and to make the decision to go around.

  6. Dynamics of the near response under natural viewing conditions with an open-view sensor.

    Science.gov (United States)

    Chirre, Emmanuel; Prieto, Pedro; Artal, Pablo

    2015-10-01

    We have studied the temporal dynamics of the near response (accommodation, convergence and pupil constriction) in healthy subjects when accommodation was performed under natural binocular and monocular viewing conditions. A binocular open-view multi-sensor based on an invisible infrared Hartmann-Shack sensor was used for non-invasive measurements of both eyes simultaneously in real time at 25Hz. Response times for each process under different conditions were measured. The accommodative responses for binocular vision were faster than for monocular conditions. When one eye was blocked, accommodation and convergence were triggered simultaneously and synchronized, despite the fact that no retinal disparity was available. We found that upon the onset of the near target, the unblocked eye rapidly changes its line of sight to fix it on the stimulus while the blocked eye moves in the same direction, producing the equivalent to a saccade, but then converges to the (blocked) target in synchrony with accommodation. This open-view instrument could be further used for additional experiments with other tasks and conditions.

  7. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-07-03

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMappingapproach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

  8. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMappingapproach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  9. Segmented refraction of the crystalline lens as a prerequisite for the occurrence of monocular polyplopia, increased depth of focus, and contrast sensitivity function notches

    Energy Technology Data Exchange (ETDEWEB)

    Bour, L. [Graduate School of Neurosciences Amsterdam, Department of Neurology, Clinical Neurophysiology Unit, Academic Medical Center, Amsterdam (Netherlands); Apkarian, P. [Department of Physiology I, Erasmus University, Rotterdam (Netherlands)

    1994-11-01

    Theoretical computations of modulation transfer functions (MTF`s) of the optical system of the human eye have shown that irregular aberration consisting of a small circular segment with refractive power slightly different from the surround introduces at higher spatial frequencies ({gt}20 cpd) an enhancement of the retinal image contrast on flanks of the optimum-focus plane. When the pupil size is larger than 3 mm, enhancement is substantial; as a result, multiple foci appear at the affected, higher spatial frequencies and generate a greater depth of focus. The contrast enhancement also produces troughs on either flank of the optimum-focus plane. With slight coincident defocus ({plus_minus}0.5 diopter) of the retinal image of a sine-wave grating, notches in the MTF curves, with a contrast reduction in the intermediate frequency range of a factor of 2 to 3 and a low cutoff spatial frequency of {similar_to} 3 cycles/deg, are produced. In our theoretical study, multiple foci, monocular polyplopia, and increased depth of focus are implicated in the generation of contrast sensitivity function (CSF) notches. It is demonstrated that CSF notches of optical origin can extend to lower spatial frequencies ({lt}10 cycles/deg). As a result, before the presence of a CSF notch can be attributed to neurological abnormality, optical factors, including irregular aberrations, must be eliminated.

  10. Reduced responsiveness to long-term monocular deprivation of parvalbumin neurons assessed by c-Fos staining in rat visual cortex.

    Directory of Open Access Journals (Sweden)

    Marco Mainardi

    Full Text Available BACKGROUND: It is generally assumed that visual cortical cells homogeneously shift their ocular dominance (OD in response to monocular deprivation (MD, however little experimental evidence directly supports this notion. By using immunohistochemistry for the activity-dependent markers c-Fos and Arc, coupled with staining for markers of inhibitory cortical sub-populations, we studied whether long-term MD initiated at P21 differentially affects visual response of inhibitory neurons in rat binocular primary visual cortex. METHODOLOGY/PRINCIPAL FINDINGS: The inhibitory markers GAD67, parvalbumin (PV, calbindin (CB and calretinin (CR were used. Visually activated Arc did not colocalize with PV and was discarded from further studies. MD decreased visually induced c-Fos activation in GAD67 and CR positive neurons. The CB population responded to MD with a decrease of CB expression, while PV cells did not show any effect of MD on c-Fos expression. The persistence of c-Fos expression induced by deprived eye stimulation in PV cells is not likely to be due to a particularly low threshold for activity-dependent c-Fos induction. Indeed, c-Fos induction by increasing concentrations of the GABAA antagonist picrotoxin in visual cortical slices was similar between PV cells and the other cortical neurons. CONCLUSION: These data indicate that PV cells are particularly refractory to MD, suggesting that different cortical subpopulation may show different response to MD.

  11. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMappingapproach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

  12. Ergonomic concerns with lightbar guidance displays.

    Science.gov (United States)

    Ima, C S; Mann, D D

    2004-05-01

    This article reviews some ergonomic factors associated with agricultural guidance displays. Any technology or management decision that improves the efficiency of an agricultural operation can be considered an aspect of precision farming. Agricultural guidance displays are one such tool because they help to reduce guidance error (i.e., skipping and overlapping of implements within the field), which result in improper application of crop inputs at increased cost. Although each of the guidance displays currently available functions using a different principle, their key objective is to communicate useful guidance information to the operator of the agricultural machine. The case with which the operator obtains the required information depends on a number of ergonomic factors, such as color perceptibility, flash rate, attentional demand, display size, viewing distance, and height of placement of the display in the cab. Ergonomics can be defined as the application of knowledge to create a safe, comfortable, and effective work environment. Consequently, it is critical to consider ergonomics when designing guidance displays or when locating a display in the tractor cab. Without considering ergonomics, it is unlikely that the efficiency of the human-machine system can be optimized.

  13. Crosstalk in stereoscopic displays: a review

    Science.gov (United States)

    Woods, Andrew J.

    2012-10-01

    Crosstalk, also known as ghosting or leakage, is a primary factor in determining the image quality of stereoscopic three dimensional (3D) displays. In a stereoscopic display, a separate perspective view is presented to each of the observer's two eyes in order to experience a 3D image with depth sensation. When crosstalk is present in a stereoscopic display, each eye will see a combination of the image intended for that eye, and some of the image intended for the other eye-making the image look doubled or ghosted. High levels of crosstalk can make stereoscopic images hard to fuse and lack fidelity, so it is important to achieve low levels of crosstalk in the development of high-quality stereoscopic displays. Descriptive and mathematical definitions of these terms are formalized and summarized. The mechanisms by which crosstalk occurs in different stereoscopic display technologies are also reviewed, including micropol 3D liquid crystal displays (LCDs), autostereoscopic (lenticular and parallax barrier), polarized projection, anaglyph, and time-sequential 3D on LCDs, plasma display panels and cathode ray tubes. Crosstalk reduction and crosstalk cancellation are also discussed along with methods of measuring and simulating crosstalk.

  14. Potential improvements for dual directional view displays.

    Science.gov (United States)

    Mather, Jonathan; Parry Jones, Lesley; Gass, Paul; Imai, Akira; Takatani, Tomoo; Yabuta, Koji

    2014-02-01

    Dual directional view (DDV) displays show different images to different viewers. For example, the driver of a car looking at a central DDV display could view navigation information, while the passenger, looking from a different angle, could be watching a movie. This technology, which has now established itself on the dashboards of high-end Jaguar, Mercedes, and Range Rover cars, is manufactured by Sharp Corporation using a well-known parallax barrier technique. Unfortunately parallax barriers are associated with an inevitable drop in brightness compared with a single view display. A parallax barrier-based DDV display typically has less than half the transmission of a single view display. Here we present a solution to these problems via the use of a combined microlens and parallax barrier system, which can not only boost the brightness by 55% from a parallax barrier-only system but increase the head freedom by 25% and reduce crosstalk also. However, the use of microlenses (which must be positioned between the polarizers of the LCD) can adversely affect the contrast ratio of the display. Careful choice of the LCD mode is therefore required in order to create a DDV display that is both high in brightness and contrast ratio. The use of a single-domain vertically aligned nematic (VAN) liquid crystal (LC) mode, together with a microlens plus parallax barrier system can achieve this with a contrast ratio of 1700∶1 measured at 30° to normal incidence.

  15. Touch sensitive electrorheological fluid based tactile display

    Science.gov (United States)

    Liu, Yanju; Davidson, Rob; Taylor, Paul

    2005-12-01

    A tactile display is programmable device whose controlled surface is intended to be investigated by human touch. It has a great number of potential applications in the field of virtual reality and elsewhere. In this research, a 5 × 5 tactile display array including electrorheological (ER) fluid has been developed and investigated. Force responses of the tactile display array have been measured while a probe was moved across the upper surface. The purpose of this was to simulate the action of touch performed by human finger. Experimental results show that the sensed surface information could be controlled effectively by adjusting the voltage activation pattern imposed on the tactels. The performance of the tactile display is durable and repeatable. The touch sensitivity of this ER fluid based tactile display array has also been investigated in this research. The results show that it is possible to sense the touching force normal to the display's surface by monitoring the change of current passing through the ER fluid. These encouraging results are helpful for constructing a new type of tactile display based on ER fluid which can act as both sensor and actuator at the same time.

  16. Military market for flat panel displays

    Science.gov (United States)

    Desjardins, Daniel D.; Hopper, Darrel G.

    1997-07-01

    This paper addresses the number, function and size of primary military displays and establishes a basis to determine the opportunities for technology insertion in the immediate future and into the next millennium. The military displays market is specified by such parameters as active area and footprint size, and other characteristics such as luminance, gray scale, resolution, color capability and night vision imaging system capability. A select grouping of funded, future acquisitions, planned and predicted cockpit kits, and form-fit-function upgrades are taken into account. It is the intent of this paper to provide an overview of the DoD niche market, allowing both government and industry a timely reference to insure meeting DoD requirements for flat-panel displays on schedule and in a cost-effective manner. The aggregate DoD market for direct view displays is presently estimated to be in excess of 157,000. Helmet/head mounted displays will add substantially to this total. The vanishing vendor syndrome for older display technologies is becoming a growing, pervasive problem throughout DoD, which consequently just leverage the more modern display technologies being developed for civil-commercial markets.

  17. [Odor sensing system and olfactory display].

    Science.gov (United States)

    Nakamoto, Takamichi

    2014-01-01

    In this review, an odor sensing system and an olfactory display are introduced into people in pharmacy. An odor sensing system consists of an array of sensors with partially overlapping specificities and pattern recognition technique. One of examples of odor sensing systems is a halitosis sensor which quantifies the mixture composition of three volatile sulfide compounds. A halitosis sensor was realized using a preconcentrator to raise sensitivity and an electrochemical sensor array to suppress the influence of humidity. Partial least squares (PLS) method was used to quantify the mixture composition. The experiment reveals that the sufficient accuracy was obtained. Moreover, the olfactory display, which present scents to human noses, is explained. A multi-component olfactory display enables the presentation of a variety of smells. The two types of multi-component olfactory display are described. The first one uses many solenoid valves with high speed switching. The valve ON frequency determines the concentration of the corresponding odor component. The latter one consists of miniaturized liquid pumps and a surface acoustic wave (SAW) atomizer. It enables the wearable olfactory display without smell persistence. Finally, the application of the olfactory display is demonstrated. Virtual ice cream shop with scents was made as a content of interactive art. People can enjoy harmony among vision, audition and olfaction. In conclusion, both odor sensing system and olfactory display can contribute to the field of human health care.

  18. Evaluation of display technologies for Internet of Things (IoT)

    Science.gov (United States)

    Sabo, Julia; Fegert, Tobias; Cisowski, Matthäus Stephanus; Marsal, Anatolij; Eichberger, Domenik; Blankenbach, Karlheinz

    2017-02-01

    Internet of Things (IoT) is a booming industry. We investigated several (semi-) professional IoT devices in combination with displays (focus on reflective technologies) and LEDs. First, these displays were compared for reflectance and ambient light performance. Two measurement set-ups with diffuse conditions were used for simulating typical indoor lighting conditions of IoT displays. E-paper displays were evaluated best as they combine a relative high reflectance with large contrast ratio. Reflective monochrome LCDs show a lower reflectance but are widely available. Second we studied IoT microprocessors interfaces to displays. A µP can drive single LEDs and one or two Seg 8 LED digits directly by GPIOs. Other display technologies require display controllers with a parallel or serial interface to the microprocessor as they need dedicated waveforms for driving the pixels. Most suitable are display modules with built-in display RAM as only pixel data have to be transferred which changes. A HDMI output (e.g. Raspberry Pi) results in high cost for the displays, therefore AMLCDs are not suitable for low to medium cost IoT systems. We compared and evaluated furthermore status indicators, icons, text and graphics IoT display systems regarding human machine interface (HMI) characteristics and effectiveness as well as power consumption. We found out that low resolution graphics bistable e-paper displays are the most appropriate display technology for IoT systems as they show as well information after a power failure or power switch off during maintenance or e.g. QR codes for installation. LED indicators are the most cost effective approach which has however very limited HMI capabilities.

  19. Transparent 3D display for augmented reality

    Science.gov (United States)

    Lee, Byoungho; Hong, Jisoo

    2012-11-01

    Two types of transparent three-dimensional display systems applicable for the augmented reality are demonstrated. One of them is a head-mounted-display-type implementation which utilizes the principle of the system adopting the concave floating lens to the virtual mode integral imaging. Such configuration has an advantage in that the threedimensional image can be displayed at sufficiently far distance resolving the accommodation conflict with the real world scene. Incorporating the convex half mirror, which shows a partial transparency, instead of the concave floating lens, makes it possible to implement the transparent three-dimensional display system. The other type is the projection-type implementation, which is more appropriate for the general use than the head-mounted-display-type implementation. Its imaging principle is based on the well-known reflection-type integral imaging. We realize the feature of transparent display by imposing the partial transparency to the array of concave mirror which is used for the screen of reflection-type integral imaging. Two types of configurations, relying on incoherent and coherent light sources, are both possible. For the incoherent configuration, we introduce the concave half mirror array, whereas the coherent one adopts the holographic optical element which replicates the functionality of the lenslet array. Though the projection-type implementation is beneficial than the head-mounted-display in principle, the present status of the technical advance of the spatial light modulator still does not provide the satisfactory visual quality of the displayed three-dimensional image. Hence we expect that the head-mounted-display-type and projection-type implementations will come up in the market in sequence.

  20. Memory effect in ac plasma displays

    Science.gov (United States)

    Szlenk, K.; Obuchowicz, E.

    1993-10-01

    The bistable or `memory' mode of operation of an ac plasma display panel is presented. The difference between dc and ac plasma panel operation from the point of view of memory function is discussed. The graphic ac plasma display with thin film Cr-Cu-Cr electrodes was developed in OBREP and its basic parameters are described. It consists of 36 X 59 picture elements, its outer dimensions are: 76 X 52 mm2 and the screen size is: 49 X 30 mm2. The different dielectric glass materials were applied as dielectric layers and the influence of the properties of these materials on display parameters and memory function was investigated.