WorldWideScience

Sample records for monocular stereoscopic images

  1. Monocular and binocular edges enhance the perception of stereoscopic slant.

    Science.gov (United States)

    Wardle, Susan G; Palmisano, Stephen; Gillam, Barbara J

    2014-07-01

    Gradients of absolute binocular disparity across a slanted surface are often considered the basis for stereoscopic slant perception. However, perceived stereo slant around a vertical axis is usually slow and significantly under-estimated for isolated surfaces. Perceived slant is enhanced when surrounding surfaces provide a relative disparity gradient or depth step at the edges of the slanted surface, and also in the presence of monocular occlusion regions (sidebands). Here we investigate how different kinds of depth information at surface edges enhance stereo slant about a vertical axis. In Experiment 1, perceived slant decreased with increasing surface width, suggesting that the relative disparity between the left and right edges was used to judge slant. Adding monocular sidebands increased perceived slant for all surface widths. In Experiment 2, observers matched the slant of surfaces that were isolated or had a context of either monocular or binocular sidebands in the frontal plane. Both types of sidebands significantly increased perceived slant, but the effect was greater with binocular sidebands. These results were replicated in a second paradigm in which observers matched the depth of two probe dots positioned in front of slanted surfaces (Experiment 3). A large bias occurred for the surface without sidebands, yet this bias was reduced when monocular sidebands were present, and was nearly eliminated with binocular sidebands. Our results provide evidence for the importance of edges in stereo slant perception, and show that depth from monocular occlusion geometry and binocular disparity may interact to resolve complex 3D scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Stereoscopic image recoloring

    Science.gov (United States)

    Li, Xujie; Zhao, Hanli; Huang, Hui; Xiao, Lei; Hu, Zhongyi; Shao, Jingkai

    2016-09-01

    Image recoloring is the process of modifying and adjusting the color appearance of images. Existing methods address the recoloring of a single image; we propose a method for recoloring stereoscopic image pairs. Naively recoloring each view independently would require a consistent pair of strokes on the source stereoscopic pair, but it is difficult to draw consistent strokes on both the left and right views. We show how to extend single-image recoloring to stereoscopic images: our method requires only a few user strokes on the left view and automatically transfers the corresponding strokes to the right view. We then design an optimization based on a nonlocal linear color model. This nonlocal linear color model inherits the advantages of both global and local color propagation methods: it propagates color cues globally, relatively far from the provided color constraints, while still giving the user good local control. The experimental results show that the recolored image pair is geometrically consistent with the original one.

  3. Perceptual Depth Quality in Distorted Stereoscopic Images.

    Science.gov (United States)

    Wang, Jiheng; Wang, Shiqi; Ma, Kede; Wang, Zhou

    2017-03-01

    Subjective and objective measurement of the perceptual quality of depth information in symmetrically and asymmetrically distorted stereoscopic images is a fundamentally important issue in stereoscopic 3D imaging that has not been deeply investigated. Here, we first carry out a subjective test following the traditional absolute category rating protocol widely used in general image quality assessment research. We find this approach problematic, because monocular cues and the spatial quality of images have a strong impact on the depth quality scores given by subjects, making it difficult to single out the actual contributions of stereoscopic cues in depth perception. To overcome this problem, we carry out a novel subjective study in which the depth effect is synthesized at different depth levels before various types and levels of symmetric and asymmetric distortions are applied. Instead of following the traditional approach, we ask subjects to identify and label depth polarizations, and a depth perception difficulty index (DPDI) is developed based on the percentage of correct and incorrect subject judgements. We find this approach highly effective at quantifying depth perception induced by stereo cues and observe a number of interesting effects regarding image-content dependency, distortion-type dependency, and the impact of symmetric versus asymmetric distortions. Furthermore, we propose a novel computational model for DPDI prediction. Our results show that the proposed model, without explicitly identifying image distortion types, leads to highly promising DPDI prediction performance. We believe that these are useful steps toward building a comprehensive understanding of the 3D quality-of-experience of stereoscopic images.

  4. Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens

    Science.gov (United States)

    Alonso, Julia R.

    2016-09-01

    3D-scene acquisition and representation is important in many areas ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized considering a piecewise-planar image formation model with a depth-variant point spread function (PSF), along with the known focusing distances at which the images of the stack were acquired. The consideration of a depth-variant PSF allows the application of the method to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting each eye the corresponding viewpoint according to the disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimating the depth map or segmenting the in-focus regions. Experimental results for both real and synthetic images are provided and presented as anaglyphs, but the method could easily be adapted to 3D displays based on parallax barriers or polarized light.
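
    The anaglyph presentation mentioned above can be sketched simply: each output pixel takes its red channel from the left view and its green and blue channels from the right view. This is an illustrative sketch, not the paper's implementation; pixels are assumed to be (R, G, B) tuples and `anaglyph` is a hypothetical helper name.

```python
def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left view, green and blue
    channels from the right view. Views are lists of rows of (R, G, B)
    tuples of equal size."""
    return [[(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]
```

    For a display based on a parallax barrier or polarized light, the two views would instead be interleaved spatially or presented on alternate polarizations rather than mixed per channel.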

  5. Changing perspective in stereoscopic images.

    Science.gov (United States)

    Du, Song-Pei; Hu, Shi-Min; Martin, Ralph R

    2013-08-01

    Traditional image editing techniques cannot be directly used to edit stereoscopic ("3D") media, as extra constraints are needed to ensure consistent changes are made to both left and right images. Here, we consider manipulating perspective in stereoscopic pairs. A straightforward approach based on depth recovery is unsatisfactory; instead, we use feature correspondences between stereoscopic image pairs. Given a new, user-specified perspective, we determine correspondence constraints under this perspective and optimize a 2D warp for each image that preserves straight lines and guarantees proper stereopsis relative to the new camera. Experiments verify that our method generates new stereoscopic views that correspond well to expected projections, for a wide range of specified perspectives. Various advanced camera effects, such as dolly zoom and wide-angle effects, can also be readily generated for stereoscopic image pairs using our method.

  6. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence, and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit that presents monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  7. Avoiding monocular artifacts in clinical stereotests presented on column-interleaved digital stereoscopic displays.

    Science.gov (United States)

    Serrano-Pedraza, Ignacio; Vancleef, Kathleen; Read, Jenny C A

    2016-11-01

    New forms of stereoscopic 3-D technology offer vision scientists new opportunities for research, but also come with distinct problems. Here we consider autostereo displays where the two eyes' images are spatially interleaved in alternating columns of pixels and no glasses or special optics are required. Column-interleaved displays produce an excellent stereoscopic effect, but subtle changes in the angle of view can increase cross talk or even interchange the left and right eyes' images. This creates several challenges to the presentation of cyclopean stereograms (containing structure which is only detectable by binocular vision). We discuss the potential artifacts, including one that is unique to column-interleaved displays, whereby scene elements such as dots in a random-dot stereogram appear wider or narrower depending on the sign of their disparity. We derive an algorithm for creating stimuli which are free from this artifact. We show that this and other artifacts can be avoided by (a) using a task which is robust to disparity-sign inversion (for example, a disparity-detection rather than a discrimination task), (b) using our proposed algorithm to ensure that parallax is applied symmetrically on the column-interleaved display, and (c) using a dynamic stimulus to avoid monocular artifacts from motion parallax. In order to test our recommendations, we performed two experiments using a stereoacuity task implemented with a parallax-barrier tablet. Our results confirm that these recommendations eliminate the artifacts. We believe that these recommendations will be useful to vision scientists interested in running stereo psychophysics experiments using parallax-barrier and other column-interleaved digital displays.
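
    As an illustration of the symmetric-parallax recommendation (a sketch under assumed conventions, not the authors' published algorithm), the following splits a total disparity between the two views, shifting each eye's image by roughly half, and then interleaves the views by column. `shift`, `interleave`, and `symmetric_views` are hypothetical helper names; images are lists of pixel rows.

```python
def shift(row, n):
    """Circularly shift a row of pixels right by n (n may be negative)."""
    n %= len(row)
    return row[-n:] + row[:-n] if n else row[:]

def interleave(left, right):
    """Column-interleave two equal-sized views: even columns from the
    left view, odd columns from the right (one common autostereo layout)."""
    return [[l if c % 2 == 0 else r
             for c, (l, r) in enumerate(zip(lrow, rrow))]
            for lrow, rrow in zip(left, right)]

def symmetric_views(image, disparity):
    """Apply a total disparity symmetrically: shift the left view by
    about +d/2 and the right view by about -d/2, rather than shifting
    only one eye's image by the full amount."""
    h = disparity // 2
    left = [shift(row, h) for row in image]
    right = [shift(row, -(disparity - h)) for row in image]
    return left, right
```

    Splitting the parallax between the two views keeps corresponding scene elements the same width in each eye's columns, which is the point of the symmetric application described above.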

  8. Saliency detection for stereoscopic images.

    Science.gov (United States)

    Fang, Yuming; Wang, Junle; Narwaria, Manish; Le Callet, Patrick; Lin, Weisi

    2014-06-01

    Many saliency detection models for 2D images have been proposed for various multimedia processing applications during the past decades. Currently, the emerging applications of stereoscopic display require new saliency detection models for salient region extraction. Different from saliency detection for 2D images, the depth feature has to be taken into account in saliency detection for stereoscopic images. In this paper, we propose a novel stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. Four types of features, namely color, luminance, texture, and depth, are extracted from discrete cosine transform coefficients for feature contrast calculation. A Gaussian model of the spatial distance between image patches is adopted for consideration of local and global contrast calculation. Then, a new fusion method is designed to combine the feature maps to obtain the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, the important characteristics of the human visual system, to enhance the final saliency map for stereoscopic images. Experimental results on eye tracking databases show the superior performance of the proposed model over other existing methods.
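
    The patch-contrast computation described above (feature contrast weighted by a Gaussian model of spatial distance) can be sketched for a single scalar feature channel. This is a simplified illustration with hypothetical names, not the paper's implementation, which operates on DCT-derived color, luminance, texture, and depth features.

```python
import math

def saliency(features, positions, sigma=0.3):
    """For each patch, sum its absolute feature contrast against every
    other patch, weighted by a Gaussian of inter-patch distance so that
    nearby patches contribute more (local contrast) while distant ones
    still contribute (global contrast)."""
    out = []
    for i, (fi, pi) in enumerate(zip(features, positions)):
        s = 0.0
        for j, (fj, pj) in enumerate(zip(features, positions)):
            if i == j:
                continue
            d2 = (pi[0] - pj[0]) ** 2 + (pi[1] - pj[1]) ** 2
            w = math.exp(-d2 / (2 * sigma ** 2))  # spatial weight
            s += w * abs(fi - fj)                 # feature contrast
        out.append(s)
    return out
```

    In the full model, one such map per feature would then be fused into a single saliency map and modulated by center bias and visual acuity.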

  9. P2-1: Visual Short-Term Memory Lacks Sensitivity to Stereoscopic Depth Changes but is Much More Sensitive to Monocular Depth Changes

    Directory of Open Access Journals (Sweden)

    Hae-In Kang

    2012-10-01

    Depth from both binocular disparity and monocular depth cues is presumably one of the most salient features characterizing the variety of visual objects in our daily life. It is therefore plausible to expect that human vision should be good at perceiving objects' depth changes arising from binocular disparities and monocular pictorial cues. However, what if the estimated depth needs to be remembered in visual short-term memory (VSTM) rather than just perceived? In a series of experiments, we asked participants to remember the depth of items in an array at the beginning of each trial. A set of test items followed the memory array, and the participants were asked to report whether one of the items in the test array had changed its depth relative to the remembered items. The items differed from each other in three depth conditions: (1) stereoscopic depth under binocular disparity manipulations, (2) monocular depth under pictorial cue manipulations, and (3) both stereoscopic and monocular depth. The accuracy of detecting depth change was substantially higher in the monocular condition than in the binocular condition, and accuracy in the both-depth condition was moderately improved compared to the monocular condition. These results indicate that VSTM benefits more from monocular depth than from stereoscopic depth, and further suggest that storage of depth information in VSTM requires both binocular and monocular information for optimal memory performance.

  10. Consciousness and stereoscopic environmental imaging

    Science.gov (United States)

    Mason, Steve

    2014-02-01

    The question of human consciousness has intrigued philosophers and scientists for centuries: its nature, how we perceive our environment, how we think, our very awareness of thought and self. It has been suggested that stereoscopic vision is "a paradigm of how the mind works".1 In depth perception, laws of perspective are known, reasoned, committed to memory from an early age; stereopsis, on the other hand, is a 3D experience governed by strict laws but actively joined within the brain: one sees it without explanation. How do we, in fact, process two different images into one 3D model within the mind, and does an awareness of this process give us insight into the workings of our own consciousness? To translate this idea to imaging, I employed ChromaDepth™ 3D glasses, which rely on light being refracted in a different direction for each eye: colors of differing wavelengths appear at varying distances from the viewer, resulting in a 3D space. This involves neither calculation nor manufacture of two images or views. Environmental spatial imaging was developed: a 3D image was generated that literally surrounds the viewer. The image was printed and adhered to a semi-circular mount; the viewer then entered the interior to experience colored shapes suspended in a 3D space with an apparent loss of the surface, or picture plane, upon which the image is rendered. By focusing our awareness through perception-based imaging we are able to gain a deeper understanding of how the brain works and how we see.

  11. Stereoscopic wide field of view imaging system

    Science.gov (United States)

    Prechtl, Eric F. (Inventor); Sedwick, Raymond J. (Inventor); Jonas, Eric M. (Inventor)

    2011-01-01

    A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field of view or panoramic or omni-directional still or video images.

  12. Bayesian depth estimation from monocular natural images.

    Science.gov (United States)

    Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C

    2017-05-01

    Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
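
    The final prediction step, reduced to one dimension, amounts to choosing the depth pattern under which the observed image feature is most likely. The sketch below uses a single univariate Gaussian per depth pattern for illustration; the actual model uses multivariate Gaussian mixtures over NSS features, and all names here are hypothetical.

```python
import math

def gaussian_logpdf(x, mu, var):
    """Log-density of a univariate Gaussian at x."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def predict_depth(feature, dictionary):
    """dictionary maps a canonical depth pattern label to (mean, variance)
    of its associated image feature; return the maximum-likelihood label."""
    return max(dictionary,
               key=lambda k: gaussian_logpdf(feature, *dictionary[k]))
```

    A full Bayesian predictor would additionally weight each pattern by its prior probability before taking the maximum.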

  13. Neural correlates of monocular and binocular depth cues based on natural images: a LORETA analysis.

    Science.gov (United States)

    Fischmeister, Florian Ph S; Bauer, Herbert

    2006-10-01

    Functional imaging studies investigating perception of depth rely solely on one type of depth cue based on non-natural stimulus material. To overcome these limitations and to provide a more realistic and complete set of depth cues natural stereoscopic images were used in this study. Using slow cortical potentials and source localization we aimed to identify the neural correlates of monocular and binocular depth cues. This study confirms and extends functional imaging studies, showing that natural images provide a good, reliable, and more realistic alternative to artificial stimuli, and demonstrates the possibility to separate the processing of different depth cues.

  14. Using stereoscopic imaging for visualization applications

    Energy Technology Data Exchange (ETDEWEB)

    Adelson, S.J.

    1994-02-01

    The purpose of scientific visualization is to simplify the analysis of numerical data by rendering the information as an image. Even when the image is familiar, as in the case of terrain data, preconceptions about what the image should look like and deceptive image artifacts can create misconceptions about what information is actually contained in the scene. One way of aiding the development of unambiguous visualizations is to add stereoscopic depth to the image. Despite the recent proliferation of affordable stereoscopic viewing equipment, few researchers are at this time taking advantage of stereo in their visualizations. It is generally perceived that the rendering time will have to be doubled in order to generate the pair, and so stereoscopic viewing is sacrificed in the name of expedient rendering. We show that this perception is often invalid. The second half of a stereoscopic image can be generated from the first half for a fraction of the computational cost of complete rendering, usually no more than 50% of the cost and in many cases as little as 5%. Using the techniques presented here, the benefits of stereoscopy can be added to existing visualization systems for only a small cost over current single-frame rendering methods.

  16. Depth Perception and the History of Three-Dimensional Art: Who Produced the First Stereoscopic Images?

    Science.gov (United States)

    Brooks, Kevin R

    2017-01-01

    The history of the expression of three-dimensional structure in art can be traced from the use of occlusion in Palaeolithic cave paintings, through the use of shadow in classical art, to the development of perspective during the Renaissance. However, the history of the use of stereoscopic techniques is controversial. Although the first undisputed stereoscopic images were presented by Wheatstone in 1838, it has been claimed that two sketches by Jacopo Chimenti da Empoli (c. 1600) can be fused to yield an impression of stereoscopic depth, while others suggest that Leonardo da Vinci's Mona Lisa is the world's first stereogram. Here, we report the first quantitative study of perceived depth in these works, in addition to more recent works by Salvador Dalí. To control for the contribution of monocular depth cues, ratings of the magnitude and coherence of depth were recorded for both stereoscopic and pseudoscopic presentations, with a genuine contribution of stereoscopic cues revealed by a difference between these scores. Although effects were clear for Wheatstone and Dalí's images, no such effects could be found for works produced earlier. As such, we have no evidence to reject the conventional view that the first producer of stereoscopic imagery was Sir Charles Wheatstone.

  17. Generating Stereoscopic Television Images With One Camera

    Science.gov (United States)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
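
    The delay scheme can be sketched with a simple frame buffer: each output pair combines the current frame with one captured a fixed number of frames earlier, when the translating camera was at the other eye's position. This is an illustrative sketch with hypothetical names, not the actual NASA implementation.

```python
from collections import deque

def stereo_from_motion(frames, delay):
    """Pair each frame with the frame captured `delay` frames earlier;
    with a laterally translating camera, the two members of each pair
    correspond to left- and right-eye viewpoints."""
    buf = deque(maxlen=delay + 1)
    pairs = []
    for f in frames:
        buf.append(f)
        if len(buf) == delay + 1:
            pairs.append((buf[0], f))  # (earlier view, current view)
    return pairs
```

    In practice `delay` would be chosen so that the camera's translation during the delay matches a comfortable interocular baseline.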

  18. Venus - Stereoscopic Images of Volcanic Domes

    Science.gov (United States)

    1991-01-01

    This Magellan image depicts a stereoscopic pair of an area on Venus with small volcanic domes. Stereoscopic images of Venus offer exciting new possibilities for scientific analysis of Venusian landforms, such as the domes shown here, impact craters, graben -- long rifts bounded by faults -- and other geologic features. Stereopsis, or a three-dimensional view of this scene, may be obtained by viewing with a stereoscope. One may also cut this photograph into two parts and look at the left image with the left eye and the right image with the right eye; conjugate images (the same features) should be about 5 centimeters (2 inches) apart when viewing. This area is located at 38.4 degrees south latitude and 78.3 degrees east longitude. The incidence, or look, angle of the left image is 28.5 degrees and that of the right image is 15.6 degrees. Radar illumination for both images comes from the left. A small dome at left center is about 140 meters (464 feet) high and 6 kilometers (3.7 miles) wide. Other domes with smaller relief can be perceived in three dimensions. At the smaller incidence angle used to acquire the image on the right, radar brightness is more sensitive to small changes in topography. This enhances the visibility of many of the domes in this scene.

  19. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, then the relative differences between right and left eye locations are used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for our human visual systems; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones, for stereo display technology and depth compression algorithms.

  1. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance.

    Science.gov (United States)

    Mela, Christopher A; Patterson, Carrie; Thompson, William K; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously,

  2. Intermediate view synthesis from stereoscopic images

    Institute of Scientific and Technical Information of China (English)

    Lü Chaohui; An Ping; Zhang Zhaoyang

    2005-01-01

    A new method is proposed for synthesizing intermediate views from a pair of stereoscopic images. In order to synthesize high-quality intermediate views, block matching together with a simplified multi-window technique and dynamic programming is used in the process of disparity estimation. Then occlusion detection is performed to locate occluded regions, and their disparities are compensated. After the projection of the left-to-right and right-to-left disparities onto the intermediate image, the intermediate view is synthesized, taking occluded regions into account. Experimental results show that our synthesis method can obtain intermediate views of higher quality.
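
    The projection step can be illustrated in one dimension: each pixel of the left view is shifted by a fraction of its disparity toward the right view, and unfilled positions remain as occlusion holes. This is a minimal sketch with hypothetical names; the paper's method additionally detects occlusions and compensates their disparities.

```python
def intermediate_view(left, disparity, alpha):
    """Shift each left-view pixel by alpha times its disparity.
    alpha = 0 reproduces the left view; alpha = 1 approximates the
    right view; values in between give intermediate viewpoints."""
    out = [None] * len(left)
    for x, pixel in enumerate(left):
        nx = x + int(round(alpha * disparity[x]))
        if 0 <= nx < len(out):
            out[nx] = pixel  # later (right-hand) pixels overwrite earlier
    return out
```

    A symmetric projection from the right view would then be blended in to fill the remaining holes, which is where the occlusion handling matters.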

  3. Vergence and accommodation to multiple-image-plane stereoscopic displays: "real world" responses with practical image-plane separations?

    Science.gov (United States)

    MacKenzie, Kevin J.; Dickson, Ruth A.; Watt, Simon J.

    2012-01-01

    Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One solution is to distribute image intensity across a number of widely spaced image planes, a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters (D, the reciprocal of distance in meters), suggesting that a small number of image planes could eliminate vergence-accommodation conflicts over a large range of simulated distances. Evidence exists, however, of systematic differences between accommodation responses to binocular and monocular stimuli when the stimulus to accommodation is degraded, or at an incorrect distance. We examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to changes in depth specified by depth filtering, using image-plane separations of 0.6 to 1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6 to 0.9 D, but differed thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display.
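
    Depth filtering distributes image intensity between the two image planes bracketing the simulated distance, in proportion to dioptric distance. A sketch of the weighting under the usual linear-interpolation assumption (distances in diopters; `depth_filter_weights` is a hypothetical name):

```python
def depth_filter_weights(target_d, near_d, far_d):
    """Split image intensity between a near and a far image plane so that
    the intensity-weighted dioptric position equals the simulated
    distance. All arguments are in diopters (near_d > far_d)."""
    w_near = (target_d - far_d) / (near_d - far_d)
    return w_near, 1.0 - w_near  # (near-plane weight, far-plane weight)
```

    For example, with planes at 2.0 D and 1.0 D, a simulated distance of 1.25 D puts a quarter of the intensity on the near plane and three quarters on the far plane, giving a continuously variable accommodation stimulus from a small number of planes.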

  4. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views predicts the quality of symmetrically distorted stereoscopic images well, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains single-view images as well as symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, in which we find that the quality prediction bias for asymmetrically distorted images can lean in opposite directions (overestimation or underestimation), depending on the distortion types and levels. Our subjective test also suggests that the eye dominance effect does not have a strong impact on the visual quality decisions for stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of stereoscopic images.
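The pooling idea can be illustrated with a toy stand-in: rather than plainly averaging the two views' quality scores, weight each score by an estimate of that view's information content. Here local signal variance serves as a crude proxy for the paper's divisive-normalization-based measure; the scores and images are made up.

```python
import numpy as np

def pooled_quality(q_left, q_right, left, right):
    """Information-weighted pooling of per-view quality scores (sketch)."""
    w_l = np.var(left) + 1e-8   # crude proxy for information content
    w_r = np.var(right) + 1e-8
    return (w_l * q_left + w_r * q_right) / (w_l + w_r)

rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 64))
flattened = clean * 0.2          # a low-energy, heavily degraded view
q = pooled_quality(0.9, 0.4, clean, flattened)
print(q, (0.9 + 0.4) / 2)        # pooled score leans toward the richer view
```

For symmetric distortions the two weights coincide and this reduces to plain averaging, which is consistent with averaging working well only in the symmetric case.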

  5. StereoPasting: interactive composition in stereoscopic images.

    Science.gov (United States)

    Tong, Ruo-Feng; Zhang, Yun; Cheng, Ke-Li

    2013-08-01

    We propose "StereoPasting," an efficient method for depth-consistent stereoscopic composition, in which a source 2D image is interactively blended into a target stereoscopic image. As the user paints "disparity" on the 2D image, the disparity map of the selected region is gradually produced by edge-aware diffusion and then blended with that of the target stereoscopic image. By considering constraints from the expected disparities and perspective scaling, the 2D object is warped to generate an image pair, which is then blended into the target image pair to obtain the composition result. The warping is formulated as an energy minimization, which can be solved in real time. We also present an interactive composition system in which users can edit the disparity maps of 2D images with strokes while viewing the composition results instantly. Experiments show that our method is intuitive and efficient for interactive stereoscopic composition, and a wide range of applications demonstrates its versatility.
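The edge-aware diffusion step can be sketched as a Jacobi-style relaxation in which painted disparities stay fixed and spread to their neighbors with weights that collapse across strong image edges. The exponential weight, sigma, and iteration count below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def diffuse_disparity(image, strokes, mask, iters=600, sigma=0.1):
    """Jacobi-style edge-aware diffusion of painted disparity values."""
    disp = strokes.astype(np.float64).copy()
    h, w = image.shape
    for _ in range(iters):
        new = disp.copy()
        for y in range(h):
            for x in range(w):
                if mask[y, x]:
                    continue  # user-painted disparities stay fixed
                num = den = 0.0
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    # Weight collapses across strong intensity edges.
                    wgt = np.exp(-abs(image[y, x] - image[ny, nx]) / sigma)
                    num += wgt * disp[ny, nx]
                    den += wgt
                new[y, x] = num / den
        disp = new
    return disp

# Two flat regions separated by a strong edge, one disparity stroke in each:
# diffusion fills each region from its own stroke without crossing the edge.
img = np.zeros((6, 8))
img[:, 4:] = 1.0
strokes = np.zeros((6, 8))
mask = np.zeros((6, 8), dtype=bool)
strokes[3, 1], mask[3, 1] = 2.0, True
strokes[3, 6], mask[3, 6] = 5.0, True
disp = diffuse_disparity(img, strokes, mask)
print(disp[1, 1], disp[1, 6])  # close to 2.0 and 5.0
```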

  6. Evaluation of stereoscopic 3D displays for image analysis tasks

    Science.gov (United States)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or identify certain objects of interest, resulting in higher performance. Changing image acquisition from analog to digital techniques entailed a change of stereoscopic visualization techniques. Recently, different kinds of digital stereoscopic display techniques at affordable prices have appeared on the market. At Fraunhofer IITB, usability tests were carried out to find out (1) with which of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve high acceptance. First, image analysts were interviewed to define typical image analysis tasks that were expected to be solved with higher performance using stereoscopic display techniques. Next, observer experiments were carried out in which image analysts had to solve the defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the display techniques used), two of the examined stereoscopic display technologies were found to be very good and appropriate.

  7. Determination of plant height for weed detection in stereoscopic images

    OpenAIRE

    Piron, Alexis; Leemans, Vincent; Kleynen, Olivier; Destain, Marie-France

    2008-01-01

    The aim of this study was twofold. The first goal was to acquire high accuracy stereoscopic images of small-scale field scenes, the second to examine the potential of plant height as a discriminant factor between crop and weed, within carrot rows. Emphasis was put on how to determine actual plant height taking into account the variable distance from camera to ground and ground irregularities for in-field measurements. Multispectral stereoscopic images were taken over a period o...

  8. Geometrically Consistent Stereoscopic Image Editing Using Patch-Based Synthesis.

    Science.gov (United States)

    Luo, Sheng-Jie; Sun, Ying-Tse; Shen, I-Chao; Chen, Bing-Yu; Chuang, Yung-Yu

    2015-01-01

    This paper presents a patch-based synthesis framework for stereoscopic image editing. The core of the proposed method builds upon a patch-based optimization framework with two key contributions: First, we introduce a depth-dependent patch-pair similarity measure for distinguishing and better utilizing image contents with different depth structures. Second, a joint patch-pair search is proposed for properly handling the correlation between the two views. The proposed method successfully overcomes two main challenges of editing stereoscopic 3D media: (1) maintaining the depth interpretation, and (2) providing controllability of the scene depth. The method offers patch-based solutions to a wide variety of stereoscopic image editing problems, including depth-guided texture synthesis, stereoscopic NPR, paint by depth, content adaptation, and 2D-to-3D conversion. Several challenging cases are demonstrated to show the effectiveness of the proposed method. The results of user studies also show that the proposed method produces stereoscopic images with good stereoscopic and visual quality.

  9. The role of binocular disparity in stereoscopic images of objects in the macaque anterior intraparietal area.

    Directory of Open Access Journals (Sweden)

    Maria C Romero

    Neurons in the macaque Anterior Intraparietal area (AIP) encode depth structure in random-dot stimuli defined by gradients of binocular disparity, but the importance of binocular disparity in real-world objects for AIP neurons is unknown. We investigated the effect of binocular disparity on the responses of AIP neurons to images of real-world objects during passive fixation. We presented stereoscopic images of natural and man-made objects in which the disparity information was congruent or incongruent with disparity gradients present in the real-world objects, and images of the same objects where such gradients were absent. Although more than half of the AIP neurons were significantly affected by binocular disparity, the great majority of AIP neurons remained image selective even in the absence of binocular disparity. AIP neurons tended to prefer stimuli in which the depth information derived from binocular disparity was congruent with the depth information signaled by monocular depth cues, indicating that these monocular depth cues have an influence upon AIP neurons. Finally, in contrast to neurons in the inferior temporal cortex, AIP neurons do not represent images of objects in terms of categories such as animate-inanimate, but utilize representations based upon simple shape features including aspect ratio.

  10. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with the new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software built on the DirectX 9 API, to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  11. 3D-MAD: A Full Reference Stereoscopic Image Quality Estimator Based on Binocular Lightness and Contrast Perception.

    Science.gov (United States)

    Zhang, Yi; Chandler, Damon M

    2015-11-01

    Algorithms for stereoscopic image quality assessment (IQA) aim to estimate the quality of 3D images in a manner that agrees with human judgments. Modern stereoscopic IQA algorithms often apply 2D IQA algorithms to the stereoscopic views, disparity maps, and/or cyclopean images to yield an overall quality estimate based on the properties of the human visual system. This paper presents an extension of our previous 2D most apparent distortion (MAD) algorithm to a 3D version (3D-MAD) for evaluating 3D image quality. 3D-MAD operates via two main stages, which estimate perceived quality degradation due to (1) distortion of the monocular views and (2) distortion of the cyclopean view. In the first stage, the conventional MAD algorithm is applied to the two monocular views, and the combined binocular quality is then estimated via a weighted sum of the two estimates, where the weights are determined by a block-based contrast measure. In the second stage, intermediate maps corresponding to lightness distance and pixel-based contrast are generated based on a multipathway contrast gain-control model. The cyclopean view quality is then estimated by measuring statistical-difference-based features obtained from the reference and distorted stereopairs. Finally, the estimates from the two stages are combined to yield an overall quality score for the stereoscopic image. Tests on various 3D image quality databases demonstrate that our algorithm significantly improves upon many other state-of-the-art 2D/3D IQA algorithms.
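The first-stage combination can be sketched as follows. The RMS-style block contrast and the simple weighted sum are stand-ins suggested by the description; they are not the exact measures used in 3D-MAD.

```python
import numpy as np

def block_contrast(view, block=8):
    """Mean RMS contrast over non-overlapping blocks (illustrative)."""
    h, w = view.shape
    vals = [view[y:y+block, x:x+block].std() /
            (abs(view[y:y+block, x:x+block].mean()) + 1e-8)
            for y in range(0, h - block + 1, block)
            for x in range(0, w - block + 1, block)]
    return float(np.mean(vals))

def binocular_quality(q_l, q_r, left, right):
    """Contrast-weighted combination of the two monocular quality scores."""
    c_l, c_r = block_contrast(left), block_contrast(right)
    return (c_l * q_l + c_r * q_r) / (c_l + c_r)

rng = np.random.default_rng(0)
left = 0.5 + 0.2 * rng.standard_normal((32, 32))    # high-contrast view
right = 0.5 + 0.01 * rng.standard_normal((32, 32))  # low-contrast view
q = binocular_quality(0.8, 0.3, left, right)
print(q)  # pulled toward the high-contrast view's score of 0.8
```

Weighting by contrast reflects the binocular observation that the higher-contrast view tends to dominate the fused percept.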

  12. Vergence and accommodation to multiple-image-plane stereoscopic displays: 'Real world' responses with practical image-plane separations?

    Science.gov (United States)

    MacKenzie, K. J.; Dickson, R. A.; Watt, S. J.

    2011-03-01

    Conventional stereoscopic displays present images on a single focal plane. The resulting mismatch between the stimuli to the eyes' focusing response (accommodation) and to convergence causes fatigue and poor stereo performance. One promising solution is to distribute image intensity across a number of relatively widely spaced image planes - a technique referred to as depth filtering. Previously, we found this elicits accurate, continuous monocular accommodation responses with image-plane separations as large as 1.1 Diopters, suggesting that a relatively small (i.e. practical) number of image planes is sufficient to eliminate vergence-accommodation conflicts over a large range of simulated distances. However, accommodation responses have been found to overshoot systematically when the same stimuli are viewed binocularly. Here, we examined the minimum image-plane spacing required for accurate accommodation to binocular depth-filtered images. We compared accommodation and vergence responses to step changes in depth for depth-filtered stimuli, using image-plane separations of 0.6-1.2 D, and equivalent real stimuli. Accommodation responses to real and depth-filtered stimuli were equivalent for image-plane separations of ~0.6-0.9 D, but inaccurate thereafter. We conclude that depth filtering can be used to precisely match accommodation and vergence demand in a practical stereoscopic display, using a relatively small number of image planes.

  13. Stereoscopic interpretation of low-dose breast tomosynthesis projection images.

    Science.gov (United States)

    Muralidhar, Gautam S; Markey, Mia K; Bovik, Alan C; Haygood, Tamara Miner; Stephens, Tanya W; Geiser, William R; Garg, Naveen; Adrada, Beatriz E; Dogan, Basak E; Carkaci, Selin; Khisty, Raunak; Whitman, Gary J

    2014-04-01

    The purpose of this study was to evaluate stereoscopic perception of low-dose breast tomosynthesis projection images. In this Institutional Review Board exempt study, craniocaudal breast tomosynthesis cases (N = 47), consisting of 23 biopsy-proven malignant mass cases and 24 normal cases, were retrospectively reviewed. A stereoscopic pair comprised of two projection images that were ±4° apart from the zero angle projection was displayed on a Planar PL2010M stereoscopic display (Planar Systems, Inc., Beaverton, OR, USA). An experienced breast imager verified the truth for each case stereoscopically. A two-phase blinded observer study was conducted. In the first phase, two experienced breast imagers rated their ability to perceive 3D information using a scale of 1-3 and described the most suspicious lesion using the BI-RADS® descriptors. In the second phase, four experienced breast imagers were asked to make a binary decision on whether they saw a mass for which they would initiate a diagnostic workup or not and also report the location of the mass and provide a confidence score in the range of 0-100. The sensitivity and the specificity of the lesion detection task were evaluated. The results from our study suggest that radiologists who can perceive stereo can reliably interpret breast tomosynthesis projection images using stereoscopic viewing.

  14. Content- and disparity-adaptive stereoscopic image retargeting

    Science.gov (United States)

    Yan, Weiqing; Hou, Chunping; Zhou, Yuan; Xiang, Wei

    2016-02-01

    The paper proposes a content- and disparity-adaptive stereoscopic image retargeting method. To avoid distorting both salient content and disparity, we first measure the distortion differences among image saliency regions and identify the factors that cause visual distortion. The proposed method then solves a convex quadratic program that simultaneously preserves the salient region and adjusts the disparity to a target range, by relating the scaling factor of the salient region to the disparity scaling factor. Experimental results show that the proposed method successfully adapts the image disparity to the target display screen while the salient objects remain undistorted in the retargeted stereoscopic image.
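A toy version of the constrained optimization illustrates the coupling between the salient-region scale and the remaining content: choose the two horizontal scale factors so the retargeted width matches the target while the salient region stays as close to unit scale as possible. The quadratic objective and its closed-form Lagrange solution below are a simplification assumed for illustration, not the paper's full program.

```python
def retarget_scales(w_sal, w_non, w_target, alpha=10.0):
    """Minimize alpha*(s_sal-1)^2 + (s_non-1)^2
    subject to w_sal*s_sal + w_non*s_non = w_target (closed-form Lagrange)."""
    excess = w_target - (w_sal + w_non)
    half_lam = excess / (w_sal ** 2 / alpha + w_non ** 2)
    s_sal = 1.0 + half_lam * w_sal / alpha   # salient region resists scaling
    s_non = 1.0 + half_lam * w_non           # background absorbs the change
    return s_sal, s_non

# Shrink a 1000-px-wide image (300 px salient) to 800 px: the salient
# region barely shrinks while the background absorbs most of the change.
s_sal, s_non = retarget_scales(300, 700, 800)
print(s_sal, s_non)
```

Increasing `alpha` stiffens the salient region further; the paper's actual program additionally couples these spatial scales to a disparity scaling factor.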

  15. An effective algorithm for monocular video to stereoscopic video transformation based on three-level luminance correction

    Institute of Scientific and Technical Information of China (English)

    郑越; 杨淑莹

    2012-01-01

    This paper presents a new, efficient algorithm for monocular-to-stereoscopic video transformation. With this algorithm, monocular video can be transformed into stereoscopic format in nearly real time, and the output stream can be shown with a lifelike three-dimensional effect on any supported display device. The core idea is to extract images from the original monocular video and transform them into stereoscopic ones according to a Gaussian distribution, then build a three-level weighted-average brightness map from the generated stereoscopic image sequence, correct the image regions at each of the three levels, and finally compose the complete three-dimensional video. Replacing the traditional, time-consuming depth-map generation step with this approach significantly improves transformation performance: images with a three-dimensional stereoscopic effect can now be output in real time while the original monocular video is broadcast live.
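The three-level correction step, as described, can be sketched minimally: bucket pixels by brightness into three levels and apply a per-level gain. The percentile thresholds and gain values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def three_level_correct(frame, gains=(1.2, 1.0, 0.85)):
    """Apply a per-level luminance gain to dark, mid, and bright regions."""
    lo, hi = np.percentile(frame, (33, 66))  # split into three brightness levels
    out = frame.astype(np.float64).copy()
    out[frame < lo] *= gains[0]                     # boost dark regions
    out[(frame >= lo) & (frame <= hi)] *= gains[1]  # leave midtones
    out[frame > hi] *= gains[2]                     # attenuate highlights
    return np.clip(out, 0.0, 255.0)

frame = np.array([[10.0, 100.0, 250.0]])
corrected = three_level_correct(frame)
print(corrected)  # roughly [[12. 100. 212.5]]
```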

  16. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Modeling methods are categorized in two ways: top-down versus bottom-up, and generative versus discriminative. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.

  17. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  18. Computational observers and visualization methods for stereoscopic medical imaging.

    Science.gov (United States)

    Zafar, Fahad; Yesha, Yaacov; Badano, Aldo

    2014-09-22

    As stereoscopic display devices become common, evaluating their image quality becomes increasingly important. Most studies conducted on 3D displays are based on psychophysics experiments in which humans rate their experience on detection tasks, and the resulting physical measurements do not map to effects on signal detection performance. Additionally, human observer study results are often subjective and difficult to generalize. We designed a computational stereoscopic observer approach, inspired by the mechanisms of stereopsis in human vision, for task-based image assessment that makes binary decisions based on a set of image pairs. The stereo-observer is constrained to a left and a right image generated using a visualization operator to render voxel datasets. We analyze white-noise and lumpy backgrounds using volume rendering techniques. Our simulation framework generalizes many different types of model observers, including existing 2D and 3D observers, while providing the flexibility to formulate a stereo model observer following the principles of stereoscopic viewing. This methodology has the potential to replace human observer studies when exploring issues with stereo display devices to be used in medical imaging. We show results quantifying the changes in performance when varying the stereo angle, as measured by an ideal linear stereoscopic observer. Our findings indicate an increase in performance of about 13-18% for white-noise backgrounds and 20-46% for lumpy backgrounds as the stereo angle is varied from 0 to 30. The applicability of this observer extends to stereoscopic displays used in medical and entertainment imaging applications.
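A generic linear observer on stereo pairs can be sketched as follows: left and right views are concatenated into one feature vector and a matched-filter template (signal mean minus background mean) produces a scalar decision score. This is a standard linear-observer construction assumed for illustration, not the paper's exact model; the signal, image sizes, and trial counts are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros((8, 8))
signal[3:5, 3:5] = 1.0   # a small square target, present in both views

def stereo_sample(with_signal):
    """One trial: left and right views concatenated into a single vector."""
    base = signal if with_signal else np.zeros_like(signal)
    left = base + rng.standard_normal((8, 8))
    right = base + rng.standard_normal((8, 8))
    return np.concatenate([left.ravel(), right.ravel()])

pos = np.stack([stereo_sample(True) for _ in range(200)])
neg = np.stack([stereo_sample(False) for _ in range(200)])
template = pos.mean(axis=0) - neg.mean(axis=0)  # matched-filter template
scores_p, scores_n = pos @ template, neg @ template
dprime = (scores_p.mean() - scores_n.mean()) / np.sqrt(
    0.5 * (scores_p.var() + scores_n.var()))
print(dprime)  # detectability well above chance
```

Varying the stereo angle would change how the target and noise project into the two views, and hence the detectability index measured this way.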

  19. Stereoscopic high-speed imaging using additive colors

    Science.gov (United States)

    Sankin, Georgy N.; Piech, David; Zhong, Pei

    2012-04-01

    An experimental system for digital stereoscopic imaging produced by using a high-speed color camera is described. Two bright-field image projections of a three-dimensional object are captured utilizing additive-color backlighting (blue and red). The two images are simultaneously combined on a two-dimensional image sensor using a set of dichromatic mirrors, and stored for off-line separation of each projection. This method has been demonstrated in analyzing cavitation bubble dynamics near boundaries. This technique may be useful for flow visualization and in machine vision applications.
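The additive-color idea reduces to encoding the two projections in separate color channels of a single frame and recovering them offline by channel separation. The red/blue channel assignment below mirrors the backlighting described; array shapes are illustrative.

```python
import numpy as np

def combine_views(view_red, view_blue):
    """Pack two bright-field projections into one RGB frame."""
    h, w = view_red.shape
    rgb = np.zeros((h, w, 3))
    rgb[..., 0] = view_red   # projection lit by the red backlight
    rgb[..., 2] = view_blue  # projection lit by the blue backlight
    return rgb

def separate_views(rgb):
    """Offline separation: recover each projection from its channel."""
    return rgb[..., 0], rgb[..., 2]

a = np.random.default_rng(2).random((4, 4))
b = np.random.default_rng(3).random((4, 4))
ra, rb = separate_views(combine_views(a, b))
print(np.allclose(ra, a) and np.allclose(rb, b))  # True
```

In practice the separation is only as clean as the dichromatic mirrors and the sensor's channel crosstalk allow.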

  20. Light-weight monocular display unit for 3D display using polypyrrole film actuator

    Science.gov (United States)

    Sakamoto, Kunio; Ohmori, Koji

    2010-10-01

    The human vision system has visual functions for viewing 3D images with a correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display system utilizes binocular stereopsis. The authors have developed a monocular 3D vision system with accommodation mechanism, which is useful function for perceiving depth. This vision unit needs an image shift optics for generating monocular parallax images. But conventional image shift mechanism is heavy because of its linear actuator system. To improve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a polypyrrole linear actuator.

  1. 3-D Target Location from Stereoscopic SAR Images

    Energy Technology Data Exchange (ETDEWEB)

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
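The stereo principle can be sketched with a simplified flat-earth, broadside geometry (an assumption for illustration): a target at height h lays over in ground range by about -h*tan(psi) for depression angle psi, so two passes at different depression angles give two measured positions from which height and true position can be solved.

```python
import math

def solve_height(x1, psi1, x2, psi2):
    """Recover target height and true ground range from two apparent
    positions x1, x2 imaged at depression angles psi1, psi2 (radians)."""
    t1, t2 = math.tan(psi1), math.tan(psi2)
    h = (x2 - x1) / (t1 - t2)   # height from the layover difference
    x_true = x1 + h * t1        # undo the layover of pass 1
    return h, x_true

# Forward-simulate a 15 m tall target at 1000 m ground range, seen on two
# passes with 30 and 45 degree depression angles.
psi1, psi2 = math.radians(30.0), math.radians(45.0)
x1 = 1000.0 - 15.0 * math.tan(psi1)   # apparent position, pass 1
x2 = 1000.0 - 15.0 * math.tan(psi2)   # apparent position, pass 2
h_est, x_est = solve_height(x1, psi1, x2, psi2)
print(h_est, x_est)  # recovers roughly 15.0 and 1000.0
```

The closer the two depression angles, the smaller the denominator tan(psi1) - tan(psi2), and the more position noise amplifies into height error, which is why "suitably different geometries" matter.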

  2. Parallax scanning methods for stereoscopic three-dimensional imaging

    Science.gov (United States)

    Mayhew, Christopher A.; Mayhew, Craig M.

    2012-03-01

    Under certain circumstances, conventional stereoscopic imagery is subject to being misinterpreted. Stereo perception created from two static, horizontally separated views can create a "cut out" 2D appearance for objects at various planes of depth: the subject volume looks three-dimensional, but the objects themselves appear flat. This is especially true if the images are captured using small disparities. One potential explanation for this effect is that, although three-dimensional perception comes primarily from binocular vision, a human's gaze (the direction and orientation of a person's eyes with respect to their environment) and head motion also contribute additional sub-process information. The absence of this information may be the reason that certain stereoscopic imagery appears "odd" and unrealistic. Another contributing factor may be the absence of vertical disparity information in a traditional stereoscopic display. Recently, Parallax Scanning technologies have been introduced, which (1) provide a scanning methodology, (2) incorporate vertical disparity, and (3) produce stereo images with substantially smaller disparities than the human interocular distance. To test whether these three features would improve the realism and reduce the cardboard-cutout effect of stereo images, we have applied Parallax Scanning (PS) technologies to commercial stereoscopic digital cinema productions and have tested the results with a panel of stereo experts. These informal experiments show that the addition of PS information into the left and right image capture improves the overall perception of three-dimensionality for most viewers. Parallax scanning significantly increases the set of tools available for 3D storytelling while at the same time presenting imagery that is easy and pleasant to view.

  3. A single-imager stereoscopic endoscope

    Science.gov (United States)

    Keller, Kurtis; State, Andrei

    2011-03-01

    We have developed a 5.5 mm and a 10 mm dual-optical-channel laparoscope that combines both exit channels into a single, standard endoscopic eye cup, which attaches directly to a single conventional HD camera head. We have also developed image processing software that auto-calibrates, aligns, enhances, and processes the image so that it can be displayed on a stereo/3D display to achieve a true 3D effect. The advantages of such a 3D system for end users are that they do not have to purchase a new camera system, all of their existing scopes remain available for use, as do all integrated OR features. They can add 3D capability to their current HD system by purchasing only stereo scopes and a small video-processing computer, and adding a 2D/3D-capable HD monitor.

  4. XPIV-Multi-plane stereoscopic particle image velocimetry

    Science.gov (United States)

    Liberzon, A.; Gurka, R.; Hetsroni, G.

    We introduce the three-dimensional measurement technique (XPIV) based on a Particle Image Velocimetry (PIV) system. The technique provides three-dimensional and statistically significant velocity data. The main principle of the technique lies in the combination of defocus, stereoscopic and multi-plane illumination concepts. Preliminary results of the turbulent boundary layer in a flume are presented. The quality of the velocity data is evaluated by using the velocity profiles and relative turbulent intensity of the boundary layer. The analysis indicates that the XPIV is a reliable experimental tool for three-dimensional fluid velocity measurements.

  5. XPIV-Multi-plane stereoscopic particle image velocimetry

    Energy Technology Data Exchange (ETDEWEB)

    Liberzon, A. [Multiphase Flow Laboratory, Faculty of Mechanical Engineering, Technion-IIT, 32000, Haifa (Israel); Institute of Hydromechanics and Water Resources Management, ETH, Zurich (Switzerland); Gurka, R. [Multiphase Flow Laboratory, Faculty of Mechanical Engineering, Technion-IIT, 32000, Haifa (Israel); Department of Mechanical Engineering, The Johns Hopkins University, Baltimore, MD (United States); Hetsroni, G. [Multiphase Flow Laboratory, Faculty of Mechanical Engineering, Technion-IIT, 32000, Haifa (Israel)

    2004-02-01

    We introduce the three-dimensional measurement technique (XPIV) based on a Particle Image Velocimetry (PIV) system. The technique provides three-dimensional and statistically significant velocity data. The main principle of the technique lies in the combination of defocus, stereoscopic and multi-plane illumination concepts. Preliminary results of the turbulent boundary layer in a flume are presented. The quality of the velocity data is evaluated by using the velocity profiles and relative turbulent intensity of the boundary layer. The analysis indicates that the XPIV is a reliable experimental tool for three-dimensional fluid velocity measurements. (orig.)

  6. Kafka's Stereoscopes

    DEFF Research Database (Denmark)

    Holm, Isak Winkel

    In 1911, in the provincial town of Friedland, Franz Kafka encountered the Kaiserpanorama: a stereoscopic peep show offering an illusion of three-dimensional depth. After the experience, he began to create literary passages that emulate the binocular set-up of the stereoscope in the juxtaposition of two images of the same object seen from slightly different perspectives. Kafka’s stereoscopes, as I suggest calling these passages, are crucial to an understanding of the relation between literature and the political in his work. The book sets out to map the political function of the stereoscopic style by proposing three theses concerning the content, form and function of the literary stereoscopes. At the level of content, Kafka’s stereoscopes offer a representation of the configuration of a specific community. At the level of form, his stereoscopes are structured as the juxtaposition of two...

  7. Wide-Field-of-View, High-Resolution, Stereoscopic Imager

    Science.gov (United States)

    Prechtl, Eric F.; Sedwick, Raymond J.

    2010-01-01

    A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform, where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through a head-tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for displaying the stereoscopic image: head-mounted displays are one likely implementation, and 3D projection is another technology under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-saving studies can be performed to enable unattended remote-sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.

  8. Visual perception and stereoscopic imaging: an artist's perspective

    Science.gov (United States)

    Mason, Steve

    2015-03-01

    This paper continues my February 2014 IS&T/SPIE convention exploration into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that by using stereoscopic imaging people may consciously experience, or see, what they are viewing and thereby help make them more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope not only to raise awareness of visual processing but also to explore the differences and similarities between the artist and the scientist: art increases right-brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what the evidence and experience may indicate, in order to see what is happening in his work and to allow it to develop in ways he/she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just the thinking, that insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the preverbal "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. People who have experienced these images in the context of examining their own

  9. The precision of binocular and monocular depth judgments in natural settings.

    Science.gov (United States)

    McKee, Suzanne P; Taylor, Douglas G

    2010-08-01

    We measured binocular and monocular depth thresholds for objects presented in a real environment. Observers judged the depth separating a pair of metal rods presented either in relative isolation, or surrounded by other objects, including a textured surface. In the isolated setting, binocular thresholds were greatly superior to the monocular thresholds, by as much as a factor of 18. The presence of adjacent objects and textures improved the monocular thresholds somewhat, but the superiority of binocular viewing remained substantial (roughly a factor of 10). To determine whether motion parallax would improve monocular sensitivity for the textured setting, we asked observers to move their heads laterally, so that the viewing eye was displaced by 8-10 cm; this motion produced little improvement in the monocular thresholds. We also compared disparity thresholds measured with the real rods to thresholds measured with virtual images in a standard mirror stereoscope. Surprisingly, for the two naive observers, the stereoscope thresholds were far worse than the thresholds for the real rods, a finding that indicates that stereoscope measurements for unpracticed observers should be treated with caution. With practice, the stereoscope thresholds for one observer improved to almost the precision of the thresholds for the real rods.
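
    The size of the binocular advantage is easier to interpret through the standard small-angle geometry linking a depth separation to retinal disparity, delta ≈ IPD·ΔZ/Z². A quick sketch of that relation (the 65 mm interocular distance is a typical assumed value, not one from the study):

```python
import math

def disparity_arcsec(depth_sep_m, viewing_dist_m, ipd_m=0.065):
    """Approximate binocular disparity (arcseconds) produced by a depth
    separation dZ at viewing distance Z: delta ~ IPD * dZ / Z^2.
    Small-angle geometry; the IPD value is an assumption."""
    delta_rad = ipd_m * depth_sep_m / viewing_dist_m ** 2
    return math.degrees(delta_rad) * 3600.0
```

    For example, a 1 cm depth separation at 1 m viewing distance corresponds to roughly 134 arcsec of disparity, far above typical stereoacuity thresholds.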

  10. Single-channel stereoscopic video imaging modality based on transparent rotating deflector.

    Science.gov (United States)

    Radfar, Edalat; Jang, Won Hyuk; Freidoony, Leila; Park, Jihoon; Kwon, Kichul; Jung, Byungjo

    2015-10-19

    In this study, we developed a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (TRD). Sequential two-dimensional (2D) left and right images were obtained through the TRD synchronized with a camera, and the components of the imaging modality were controlled by a microcontroller unit. The imaging modality was characterized by evaluating the stereoscopic video image generation, rotation of the TRD, heat generation by the stepping motor, and image quality and its stability in terms of the structural similarity index. The degree of depth perception was estimated and subjective analysis was performed to evaluate the depth perception improvement. The results show that the single-channel stereoscopic video imaging modality may: 1) overcome some limitations of conventional stereoscopic video imaging modalities; 2) be a potential economical compact stereoscopic imaging modality if the system components can be miniaturized; 3) be easily integrated into current 2D optical imaging modalities to produce a stereoscopic image; and 4) be applied to various medical and industrial fields.

  11. Monoplane Stereoscopic Imaging Method for Inverse Geometry X-ray Fluoroscopy.

    Science.gov (United States)

    Tomkowiak, Michael T; Van Lysel, Michael S; Speidel, Michael A

    2013-03-13

    Scanning Beam Digital X-ray (SBDX) is a low-dose inverse geometry fluoroscopic system for cardiac interventional procedures. The system performs x-ray tomosynthesis at multiple planes in each frame period and combines the tomosynthetic images into a projection-like composite image for fluoroscopic display. We present a novel method of stereoscopic imaging using SBDX, in which two slightly offset projection-like images are reconstructed from the same scan data by utilizing raw data from two different detector regions. To confirm the accuracy of the 3D information contained in the stereoscopic projections, a phantom of known geometry containing high contrast steel spheres was imaged, and the spheres were localized in 3D using a previously described stereoscopic localization method. After registering the localized spheres to the phantom geometry, the 3D residual RMS errors were between 0.81 and 1.93 mm, depending on the stereoscopic geometry. To demonstrate visualization capabilities, a cardiac RF ablation catheter was imaged with the tip oriented towards the detector. When viewed as a stereoscopic red/cyan anaglyph, the true orientation (towards vs. away) could be resolved, whereas the device orientation was ambiguous in conventional 2D projection images. This stereoscopic imaging method could be implemented in real time to provide live 3D visualization and device guidance for cardiovascular interventions using a single gantry and data acquired through normal, low-dose SBDX imaging.
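
    The stereoscopic localization referenced above amounts to triangulating a feature from two offset projection-like images. As a minimal stand-in, classic two-view parallax geometry recovers depth from the horizontal shift between the views; the SBDX inverse geometry itself is more involved, and all numbers below are assumptions:

```python
def triangulate_depth(x_left, x_right, baseline_mm, focal_px):
    """Classic two-view triangulation: depth from horizontal parallax,
    Z = B * f / (x_left - x_right). A simplified stand-in for marker
    localization from two offset projection images."""
    disparity = x_left - x_right          # pixels
    if disparity == 0:
        raise ValueError("zero parallax: point effectively at infinity")
    return baseline_mm * focal_px / disparity
```

    A marker shifted by 10 px between views, with a 50 mm effective baseline and 1000 px focal length, sits at 5000 mm in this model.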

  12. Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.

    Science.gov (United States)

    Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K

    2009-06-01

    The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereoimage [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of jps files was between 0.4 to 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels) suitable for Internet protocols. A conventional personal computer system equipped with wireless stereoscopic LCD shutter glasses and a CRT-monitor with high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film transistor-LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study was met to transmit stereoscopic images via the Internet. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and slope of the inner wall of the optic nerve head, can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnostic and glaucoma research.

  13. Stereoscopic depth of field: why we can easily perceive and distinguish the depth of neighboring objects under binocular condition than monocular

    Science.gov (United States)

    Lee, Kwang-Hoon; Park, Min-Chul

    2016-06-01

    In this paper, we introduce a highly efficient and practical disparity estimation method using hierarchical bilateral filtering for real-time view synthesis. The proposed method is based on hierarchical stereo matching with hardware-efficient bilateral filtering. Hardware-efficient bilateral filtering differs from the exact bilateral filter; the purpose is to design an edge-preserving filter that can be efficiently parallelized in hardware. The proposed disparity estimation based on hierarchical bilateral filtering is essentially a coarse-to-fine use of stereo matching with bilateral filtering. It works as follows: first, a hierarchical image pyramid is constructed; the multi-scale algorithm then starts by applying local stereo matching to the downsampled images at the coarsest level of the hierarchy. After the local stereo matching, the estimated disparity map is refined with bilateral filtering. The refined disparity map is then adaptively upsampled to the next finer level, where it is used as a prior for the corresponding local stereo matching, filtered again, and so on. The method we propose is essentially a combination of hierarchical stereo matching and hardware-efficient bilateral filtering. Visual comparison using real-world stereoscopic video clips shows that the method gives better results than one of the state-of-the-art methods in terms of robustness and computation time.
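
    The per-level refinement step can be sketched as a joint bilateral filter over the disparity map, guided by image intensity so that disparity edges are preserved where the image has edges. This is a brute-force, unoptimized stand-in for the hardware-efficient approximation the abstract describes; all parameter values are illustrative.

```python
import numpy as np

def bilateral_refine(disp, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Edge-preserving refinement of a coarse disparity map `disp`,
    guided by the intensity image `guide` (joint bilateral filter)."""
    h, w = disp.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        wr = np.exp(-((guide[y, x] - guide[yy, xx]) ** 2)
                                    / (2 * sigma_r ** 2))
                        acc += ws * wr * disp[yy, xx]
                        norm += ws * wr
            out[y, x] = acc / norm
    return out
```

    Because the weights are normalized, a disparity map that is already constant passes through unchanged, while noisy disparities are smoothed without bleeding across guide-image edges.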

  14. Eyegaze Detection from Monocular Camera Image for Eyegaze Communication System

    Science.gov (United States)

    Ohtera, Ryo; Horiuchi, Takahiko; Kotera, Hiroaki

    An eyegaze interface is one of the key technologies as an input device in the ubiquitous-computing society. In particular, an eyegaze communication system is very important and useful for severely handicapped users such as quadriplegic patients. Most of the conventional eyegaze tracking algorithms require specific light sources, equipment and devices. In this study, a simple eyegaze detection algorithm is proposed using a single monocular video camera. The proposed algorithm works under the condition of fixed head pose, but slight movement of the face is accepted. In our system, we assume that all users have the same eyeball size based on physiological eyeball models. However, we succeeded in calibrating the physiological movement of the eyeball center depending on the gazing direction by approximating it as a change in the eyeball radius. In the gaze detection stage, the iris is extracted from a captured face frame by using the Hough transform. Then, the eyegaze angle is derived by calculating the Euclidean distance of the iris centers between the extracted frame and a reference frame captured in the calibration process. We applied our system to an eyegaze communication interface and verified the performance through key-typing experiments with a visual keyboard on a display.
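
    The final step above, converting the iris-centre displacement into a gaze angle, can be sketched with a spherical-eyeball model: the displacement on the image is the sine of the rotation angle scaled by the eyeball's pixel radius. A simplified reading of the method; the radius value and coordinates are assumptions.

```python
import math

def gaze_angle_deg(iris_xy, ref_xy, eyeball_radius_px):
    """Gaze angle from the Euclidean displacement of the iris centre
    between the current frame and a calibration (reference) frame,
    assuming a spherical eyeball of known pixel radius."""
    dx = iris_xy[0] - ref_xy[0]
    dy = iris_xy[1] - ref_xy[1]
    d = math.hypot(dx, dy)
    # Clamp to the valid asin domain in case of measurement noise.
    return math.degrees(math.asin(min(d / eyeball_radius_px, 1.0)))
```

    A 10 px displacement against a 100 px eyeball radius corresponds to a gaze rotation of about 5.7 degrees.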

  15. A novel stereoscopic projection display system for CT images of fractures.

    Science.gov (United States)

    Liu, Xiujuan; Jiang, Hong; Lang, Yuedong; Wang, Hongbo; Sun, Na

    2013-06-01

    The present study proposed a novel projection display system based on a virtual reality enhancement environment. The proposed system displays stereoscopic images of fractures and enhances the computed tomography (CT) images. The diagnosis and treatment of fractures primarily depend on the post-processing of CT images. However, two-dimensional (2D) images do not show overlapping structures in fractures since they are displayed without visual depth and these structures are too small to be simultaneously observed by a group of clinicians. Stereoscopic displays may solve this problem and allow clinicians to obtain more information from CT images. Hardware with which to generate stereoscopic images was designed. This system utilized the conventional equipment found in meeting rooms. The off-axis algorithm was adopted to convert the CT images into stereo image pairs, which were used as the input for a stereo generator. The final stereoscopic images were displayed using a projection system. Several CT fracture images were imported into the system for comparison with traditional 2D CT images. The results showed that the proposed system aids clinicians in group discussions by producing large stereoscopic images. The results demonstrated that the enhanced stereoscopic CT images generated by the system appear clearer and smoother, such that the sizes, displacement and shapes of bone fragments are easier to assess. Certain fractures that were previously not visible on 2D CT images due to vision overlap became vividly evident in the stereo images. The proposed projection display system efficiently, economically and accurately displayed three-dimensional (3D) CT images. The system may help clinicians improve the diagnosis and treatment of fractures.
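
    The off-axis algorithm mentioned above renders each eye through an asymmetric frustum, so a point's horizontal on-screen parallax depends only on its depth relative to the screen plane: zero at the screen, crossed (negative) in front of it, approaching the eye separation behind it. A minimal sketch of that relationship; the eye separation, screen distance and units are illustrative assumptions, not system parameters from the paper.

```python
def screen_parallax(eye_sep_mm, screen_dist_mm, point_depth_mm):
    """Horizontal on-screen parallax of a point rendered with the
    off-axis (asymmetric frustum) stereo method:
    p = e * (1 - D / z), with eye separation e, screen distance D,
    and point depth z measured from the viewer."""
    return eye_sep_mm * (1.0 - screen_dist_mm / point_depth_mm)
```

    Points at the screen plane fuse with zero parallax; a point at twice the screen distance already needs half the eye separation as parallax, which is why depth budget must be managed in stereoscopic CT display.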

  16. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine.

    Science.gov (United States)

    Xia, Tian; Patel, Shriji N; Szirth, Ben C; Kolomeyer, Anton M; Khouri, Albert S

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40 ± 14 years. Mean VCD on SONI was 0.36 ± 0.09, with DAS 0.38 ± 0.08, and with nonstereoscopic 0.29 ± 0.12. The difference between stereoscopic and DAS assisted was not significant (p = 0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p < 0.05) and nonstereoscopic and DAS (p < 0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD.

  17. Software-Assisted Depth Analysis of Optic Nerve Stereoscopic Images in Telemedicine

    Directory of Open Access Journals (Sweden)

    Tian Xia

    2016-01-01

    Background. Software guided optic nerve assessment can assist in process automation and reduce interobserver disagreement. We tested depth analysis software (DAS) in assessing optic nerve cup-to-disc ratio (VCD) from stereoscopic optic nerve images (SONI) of normal eyes. Methods. In a prospective study, simultaneous SONI from normal subjects were collected during telemedicine screenings using a Kowa 3Wx nonmydriatic simultaneous stereoscopic retinal camera (Tokyo, Japan). VCD was determined from SONI pairs and proprietary pixel DAS (Kowa Inc., Tokyo, Japan) after disc and cup contour line placement. A nonstereoscopic VCD was determined using the right channel of a stereo pair. Mean, standard deviation, t-test, and the intraclass correlation coefficient (ICCC) were calculated. Results. 32 patients had mean age of 40±14 years. Mean VCD on SONI was 0.36±0.09, with DAS 0.38±0.08, and with nonstereoscopic 0.29±0.12. The difference between stereoscopic and DAS assisted was not significant (p=0.45). ICCC showed agreement between stereoscopic and software VCD assessment. Mean VCD difference was significant between nonstereoscopic and stereoscopic (p<0.05) and nonstereoscopic and DAS (p<0.005) recordings. Conclusions. DAS successfully assessed SONI and showed a high degree of correlation to physician-determined stereoscopic VCD.

  18. Zerotree-based stereoscopic video CODEC

    Science.gov (United States)

    Thanapirom, S.; Fernando, W. A. C.; Edirisinghe, Eran A.

    2005-07-01

    Due to the provision of a more natural representation of a scene in the form of left and right eye views, a stereoscopic imaging system provides a more effective method for image/video display. Unfortunately the vast amount of information that must be transmitted/stored to represent a stereo image pair/video sequence has so far hindered its use in commercial applications. However, by properly exploiting the spatial, temporal and binocular redundancy, a stereo image pair or a sequence could be compressed and transmitted through a single monocular channel's bandwidth without unduly sacrificing the perceived stereoscopic image quality. We propose a timely and novel framework to transmit stereoscopic data efficiently, and present a new technique for coding stereo video sequences based on discrete wavelet transform (DWT) technology. The proposed technique particularly exploits zerotree entropy (ZTE) coding, which makes use of the wavelet block concept to achieve low bit rate stereo video coding. One of the two image streams, namely the main stream, is independently coded by a zerotree video CODEC, while the second stream, namely the auxiliary stream, is predicted based on disparity compensation. A zerotree video CODEC subsequently codes the residual stream. We compare the performance of the proposed CODEC with a discrete cosine transform (DCT)-based, modified MPEG-2 stereo video CODEC, and show that the proposed CODEC outperforms the benchmark CODEC in coding both main and auxiliary streams.
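
    The disparity-compensated prediction step can be sketched as a per-block horizontal search: each block of the auxiliary view is predicted by the best-matching (SAD criterion) horizontally shifted block of the main view, and the residual is what the zerotree wavelet coder would then compress. A minimal sketch; block size, search range and the integer-shift model are illustrative assumptions.

```python
import numpy as np

def dc_residual(main, aux, block=4, max_disp=4):
    """Disparity-compensated prediction of the auxiliary view from the
    main view (horizontal block search, SAD criterion); returns the
    prediction residual."""
    h, w = main.shape
    pred = np.zeros_like(main)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tgt = aux[by:by + block, bx:bx + block]
            best, best_sad = 0, np.inf
            for d in range(0, max_disp + 1):
                if bx + d + block > w:
                    break                    # shifted block leaves the image
                cand = main[by:by + block, bx + d:bx + d + block]
                sad = np.abs(cand.astype(int) - tgt.astype(int)).sum()
                if sad < best_sad:
                    best_sad, best = sad, d
            pred[by:by + block, bx:bx + block] = \
                main[by:by + block, bx + best:bx + best + block]
    return aux.astype(int) - pred.astype(int)
```

    When the auxiliary view is an exact horizontal shift of the main view, the residual collapses to zero, which is exactly the redundancy the single-channel stereo CODEC exploits.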

  19. Synthetic phase holograms for auto-stereoscopic image displays using a modified IFTA

    Science.gov (United States)

    Choi, Kyongsik; Kim, Hwi; Lee, Byoungho

    2004-05-01

    A Fourier-transformed synthetic phase hologram for an auto-stereoscopic image display system is proposed and implemented. The system uses a phase-only spatial light modulator and a simple projection lens module. A modified iterative Fresnel transform algorithm for reconstructing gray-level quantized stereo images with fast convergence, high diffraction efficiency and a large signal-to-noise ratio is also described. Using this method, it is possible to obtain a high diffraction efficiency (~90%), an excellent signal-to-noise ratio (>9.6 dB), and a short calculation time (~3 min). Experimentally, the proposed auto-stereoscopic display system was able to generate stereoscopic 3D images very well.
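
    The iterative-transform idea can be illustrated with a generic Gerchberg-Saxton-style loop: alternate between the hologram plane, where a phase-only constraint is enforced, and the image plane, where the target amplitude is enforced. This is a plain Fourier-domain sketch, not the paper's modified Fresnel-domain algorithm or its gray-level quantization scheme.

```python
import numpy as np

def ifta_phase_hologram(target, iters=50):
    """Generic iterative Fourier-transform algorithm: returns a
    phase-only hologram whose far field approximates |target|."""
    # Random initial phase in the image plane.
    field = target * np.exp(2j * np.pi * np.random.rand(*target.shape))
    for _ in range(iters):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))           # phase-only constraint
        img = np.fft.fft2(holo)
        field = target * np.exp(1j * np.angle(img))  # amplitude constraint
    return np.angle(holo)
```

    After a few dozen iterations the reconstructed intensity concentrates on the target spots; diffraction efficiency and SNR figures like those quoted above are measured on exactly this kind of reconstruction.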

  20. Cameras a Million Miles Apart: Stereoscopic Imaging Potential with the Hubble and James Webb Space Telescopes

    CERN Document Server

    Green, Joel D; Stansberry, John A; Meinke, Bonnie

    2016-01-01

    The two most powerful optical/IR telescopes in history -- NASA's Hubble and James Webb Space Telescopes -- will be in space at the same time. We have a unique opportunity to leverage the 1.5 million kilometer separation between the two telescopic nodal points to obtain simultaneously captured stereoscopic images of asteroids, comets, moons and planets in our Solar System. Given the recent resurgence in stereo-3D movies and the recent emergence of VR-enabled mobile devices, these stereoscopic images provide a unique opportunity to engage the public with unprecedented views of various Solar System objects. Here, we present the technical requirements for acquiring stereoscopic images of Solar System objects, given the constraints of the telescopic equipment and the orbits of the target objects, and we present a handful of examples.
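
    The underlying geometry is simple: the angular parallax of a target is the telescope baseline divided by the target distance (small-angle approximation). A quick check with the record's ~1.5 million km separation, taking an asteroid at an assumed 2 AU:

```python
import math

def parallax_arcsec(baseline_km, target_dist_km):
    """Small-angle parallax, in arcseconds, of a target seen from two
    vantage points separated by baseline_km."""
    return math.degrees(baseline_km / target_dist_km) * 3600.0

AU_KM = 1.496e8  # one astronomical unit in kilometres
# An asteroid 2 AU away, seen across the ~1.5e6 km Hubble-JWST baseline:
p = parallax_arcsec(1.5e6, 2 * AU_KM)  # about 1034 arcsec (~0.29 deg)
```

    Parallaxes of this size are enormous compared with the telescopes' resolution, which is why nearby Solar System objects are such attractive stereoscopic targets while distant stars are not.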

  1. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    Science.gov (United States)

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or extra information injected. In our case, the integral of the partition function can be calculated in closed form, so that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting the depths of a test image is highly efficient, as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
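
    The closed-form inference mentioned above follows from the quadratic structure of the energy: with quadratic unary terms on the network's per-region depth predictions z and quadratic pairwise smoothness weights W, minimizing ||d - z||² + λ Σᵢⱼ Wᵢⱼ (dᵢ - dⱼ)² reduces to one linear solve, (I + 2λL) d = z with L the graph Laplacian. This is a generic sketch of why the MAP estimate is closed-form, not the authors' code; the toy graph and weights are assumptions.

```python
import numpy as np

def crf_depth_inference(z, W, lam=1.0):
    """Closed-form MAP inference for a continuous CRF with quadratic
    unary terms (depth predictions z) and symmetric pairwise weights W:
    minimizes ||d - z||^2 + lam * sum_ij W_ij (d_i - d_j)^2 by solving
    (I + 2*lam*L) d = z, where L = diag(W.sum) - W is the Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    A = np.eye(len(z)) + 2.0 * lam * L
    return np.linalg.solve(A, z)
```

    The smoothness term pulls neighbouring depths together while the unary term anchors them to the CNN output; for two connected regions predicted at 0 and 10, λ = 1 yields the compromise (4, 6).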

  2. A 3D reconstruction from real-time stereoscopic images using GPU

    OpenAIRE

    Gomez-Balderas, Jose-Ernesto; Houzet, Dominique

    2013-01-01

    In this article we propose a new technique to obtain a three-dimensional (3D) reconstruction from stereoscopic images taken by a stereoscopic system in real time. To parallelize the 3D reconstruction we propose a method that uses a Graphics Processing Unit (GPU) and a disparity map from a block matching algorithm (BM). The results obtained permit us to accelerate the image processing time, measured in frames per second (FPS...
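
    The block matching (BM) step is the part that parallelizes well on a GPU: every pixel's search is independent. A CPU-side minimal sketch of the same SAD search along the epipolar line, with no sub-pixel refinement or post-filtering (block size and search range are illustrative):

```python
import numpy as np

def block_matching_disparity(left, right, block=3, max_disp=4):
    """Per-pixel SAD block search along the horizontal epipolar line
    for a rectified pair; returns an integer disparity map."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(int)
            best, best_sad = 0, None
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(int)
                sad = np.abs(patch - cand).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best = sad, d
            disp[y, x] = best
    return disp
```

    On a GPU each (y, x) iteration becomes one thread, which is where the frames-per-second gain reported above comes from.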

  3. Stereoscopic visualization of diffusion tensor imaging data: a comparative survey of visualization techniques.

    Science.gov (United States)

    Raslan, Osama; Debnam, James Matthew; Ketonen, Leena; Kumar, Ashok J; Schellingerhout, Dawid; Wang, Jihong

    2013-01-01

    Diffusion tensor imaging (DTI) data has traditionally been displayed as a grayscale fractional anisotropy map (GSFM) or color coded orientation map (CCOM). These methods use black and white or color with intensity values to map the complex multidimensional DTI data to a two-dimensional image. Alternative visualization techniques, such as Vmax maps, utilize enhanced graphical representation of the principal eigenvector by means of a headless arrow on regular nonstereoscopic (VM) or stereoscopic display (VMS). A survey of clinical utility in patients with intracranial neoplasms was carried out by 8 neuroradiologists using traditional and nontraditional methods of DTI display. Pairwise comparison studies of 5 intracranial neoplasms were performed with a structured questionnaire comparing GSFM, CCOM, VM, and VMS. Six of 8 neuroradiologists favored Vmax maps over traditional methods of display (GSFM and CCOM). When comparing the stereoscopic (VMS) and the non-stereoscopic (VM) modes, 4 favored VMS, 2 favored VM, and 2 had no preference. In conclusion, processing and visualizing DTI data stereoscopically is technically feasible. An initial survey of users indicated that Vmax-based display methodology, with or without stereoscopic visualization, seems to be preferred over traditional methods to display DTI data.

  4. Stereoscopic Visualization of Diffusion Tensor Imaging Data: A Comparative Survey of Visualization Techniques

    Directory of Open Access Journals (Sweden)

    Osama Raslan

    2013-01-01

    Diffusion tensor imaging (DTI) data has traditionally been displayed as a grayscale fractional anisotropy map (GSFM) or color coded orientation map (CCOM). These methods use black and white or color with intensity values to map the complex multidimensional DTI data to a two-dimensional image. Alternative visualization techniques, such as Vmax maps, utilize enhanced graphical representation of the principal eigenvector by means of a headless arrow on regular nonstereoscopic (VM) or stereoscopic display (VMS). A survey of clinical utility in patients with intracranial neoplasms was carried out by 8 neuroradiologists using traditional and nontraditional methods of DTI display. Pairwise comparison studies of 5 intracranial neoplasms were performed with a structured questionnaire comparing GSFM, CCOM, VM, and VMS. Six of 8 neuroradiologists favored Vmax maps over traditional methods of display (GSFM and CCOM). When comparing the stereoscopic (VMS) and the non-stereoscopic (VM) modes, 4 favored VMS, 2 favored VM, and 2 had no preference. In conclusion, processing and visualizing DTI data stereoscopically is technically feasible. An initial survey of users indicated that Vmax-based display methodology, with or without stereoscopic visualization, seems to be preferred over traditional methods to display DTI data.

  5. Monocular depth perception using image processing and machine learning

    Science.gov (United States)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper exploits some of the more obscure but inherent properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method involves the use of a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. In doing so, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, by using a set of derived spatial geometric relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation using a series of image processing steps, followed by exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to successively optimize each run. Using the above procedure, a series of experiments and trials was carried out to prove the concept and its efficacy.
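
    The core pixel-to-real-space mapping can be sketched with standard flat-ground geometry: once calibration supplies the camera height, focal length, and horizon row, the distance to the point where a pixel row's back-projected ray meets the ground plane is Z = h·f / (v - v_horizon). This is the textbook relation, not the paper's specific calibration; all numeric values are assumptions.

```python
def ground_distance(v_px, v_horizon_px, focal_px, cam_height_m):
    """Distance along the ground plane to the point imaged at pixel
    row v_px, for a camera of height cam_height_m whose horizon
    projects to row v_horizon_px: Z = h * f / (v - v_horizon)."""
    dv = v_px - v_horizon_px
    if dv <= 0:
        raise ValueError("pixel at or above the horizon has no ground hit")
    return cam_height_m * focal_px / dv
```

    With an assumed 1.5 m camera height, 800 px focal length, and horizon at row 400, a pixel at row 600 maps to a ground point 6 m away; rows closer to the horizon map to rapidly increasing distances, which is why learning-based correction helps near the horizon.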

  6. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on base poses trained a priori. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  7. The compressed average image intensity metric for stereoscopic video quality assessment

    Science.gov (United States)

    Wilczewski, Grzegorz

    2016-09-01

    This article presents the design, creation and testing of a metric developed for 3DTV video quality evaluation. The Compressed Average Image Intensity (CAII) mechanism is based upon stereoscopic video content analysis, making it a versatile tool for effective 3DTV service quality assessment. Being an objective quality metric, it may be utilized as a reliable source of information about the actual performance of a given 3DTV system under strict provider evaluation. Concerning testing and the overall performance analysis of the CAII metric, the paper presents a comprehensive study of results gathered across several testing routines over a selected set of stereoscopic video content samples. In this way, the designed method for stereoscopic video quality evaluation is investigated across a range of synthetic visual impairments injected into the original video stream.

  8. Automatic calculation of tree diameter from stereoscopic image pairs using digital image processing.

    Science.gov (United States)

    Yi, Faliu; Moon, Inkyu

    2012-06-20

    Automatic operations play an important role in society by saving time and improving efficiency. In this paper, we apply digital image processing to the field of lumbering to automatically calculate tree diameters, in order to reduce culler work and enable a third party to verify tree diameters. To calculate the cross-sectional diameter of a tree, the image was first segmented by the marker-controlled watershed transform algorithm based on the hue saturation intensity (HSI) color model. Then, the tree diameter was obtained by measuring the area of every isolated region in the segmented image. Finally, the true diameter was calculated by multiplying the diameter computed in the image by the scale, which was derived from the baseline and disparity of corresponding points in stereoscopic image pairs captured by cameras in a rectified configuration.
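
    The area-to-diameter and stereo-scale steps combine as follows: the equivalent circular diameter of a segmented region (in pixels) is scaled by metres-per-pixel at the tree's depth, where the depth itself comes from stereo geometry Z = B·f/d. A minimal sketch with illustrative variable names and units; the paper's actual calibration constants are not given in the record.

```python
import math

def tree_diameter_cm(region_area_px, baseline_m, focal_px, disparity_px):
    """Cross-section diameter from a segmented region's pixel area,
    converted to metric units via stereo depth Z = B * f / d and the
    pinhole scale (metres per pixel) at that depth."""
    diameter_px = 2.0 * math.sqrt(region_area_px / math.pi)  # equiv. circle
    depth_m = baseline_m * focal_px / disparity_px           # stereo depth
    metres_per_px = depth_m / focal_px                       # scale at Z
    return diameter_px * metres_per_px * 100.0
```

    Note that the focal length cancels: the scale reduces to baseline/disparity, so diameter accuracy hinges on the disparity of the correspondence points.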

  9. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Gu, Shanbo; Jiang, Gangyi; Srikanthan, Thambipillai

    2013-05-01

    Perceptual quality assessment is a challenging issue in 3D signal processing research. It is important to study the 3D signal directly, instead of simply extending 2D metrics to the 3D case as in some previous studies. In this paper, we propose a new perceptual full-reference quality assessment metric for stereoscopic images that considers binocular visual characteristics. The major technical contribution of this paper is that the binocular perception and combination properties are considered in quality assessment. To be more specific, we first perform left-right consistency checks and compare matching error between the corresponding pixels in binocular disparity calculation, and classify the stereoscopic images into non-corresponding, binocular fusion, and binocular suppression regions. Local phase and local amplitude maps are also extracted from the original and distorted stereoscopic images as features in quality assessment. Then, each region is evaluated independently according to its binocular perception property, and all evaluation results are integrated into an overall score. In addition, a binocular just-noticeable-difference model is used to reflect the visual sensitivity for the binocular fusion and suppression regions. Experimental results show that, compared with the relevant existing metrics, the proposed metric achieves higher consistency with subjective assessment of stereoscopic images.

  10. Flow analysis of vortex generators on wing sections by stereoscopic particle image velocimetry measurements

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Cavar, Dalibor

    2008-01-01

    Stereoscopic particle image velocimetry measurements have been executed in a low speed wind tunnel in spanwise planes in the flow past a row of vortex generators, mounted on a bump in a fashion producing counter-rotating vortices. The measurement technique is a powerful tool which provides all...

  11. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames's Hypothesis.

    Science.gov (United States)

    Vishwanath, Dhanraj

    2016-03-01

    Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames's claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  12. Quasi-stereoscopic imaging of the solar X-ray corona

    Science.gov (United States)

    Batchelor, David

    1994-01-01

    The first published three-dimensional images of the solar X-ray corona obtained by means of solar rotational parallax are presented in stereographic form. Image pairs approximately 12 hours apart during times of stable coronal conditions were selected from the digitized images obtained with the Skylab X-ray Spectrographic Telescope. The image resolution limit is approximately 10 arc sec. Many coronal structures not visible in the separate images are clearly observed when the image pairs are viewed stereoscopically. This method gives a preview of the potential resources for solar research and forecasting of solar-geomagnetic interactions that could be provided by stereoscopic observations of the Sun using a small group of spacecraft. The method is also applicable to X-ray, ultraviolet, and other wavebands in which the corona has extended, transparent structure.

  13. Stereoscopic Depth Perception during Binocular Rivalry.

    Science.gov (United States)

    Andrews, Timothy J; Holmes, David

    2011-01-01

    When we view nearby objects, we generate appreciably different retinal images in each eye. Despite this, the visual system can combine these different images to generate a unified view that is distinct from the perception generated from either eye alone (stereopsis). However, there are occasions when the images in the two eyes are too disparate to fuse. Instead, they alternate in perceptual dominance, with the image from one eye being completely excluded from awareness (binocular rivalry). It has been thought that binocular rivalry is the default outcome when binocular fusion is not possible. However, other studies have reported that stereopsis and binocular rivalry can coexist. The aim of this study was to address whether a monocular stimulus that is reported to be suppressed from awareness can continue to contribute to the perception of stereoscopic depth. Our results showed that stereoscopic depth perception was still evident when incompatible monocular images differing in spatial frequency, orientation, spatial phase, or direction of motion engage in binocular rivalry. These results demonstrate a range of conditions in which binocular rivalry and stereopsis can coexist.

  14. Stereoscopic Depth Perception during Binocular Rivalry

    Directory of Open Access Journals (Sweden)

    Tim Andrews

    2011-09-01

    When we view nearby objects, we generate appreciably different retinal images in each eye. Despite this, the visual system can combine these different images to generate a unified view that is distinct from the perception generated from either eye alone (stereopsis). However, there are occasions when the images in the two eyes are too disparate to fuse. Instead, they alternate in perceptual dominance, with the image from one eye being completely excluded from awareness (binocular rivalry). It has been thought that binocular rivalry is the default outcome when binocular fusion is not possible. However, other studies have reported that stereopsis and binocular rivalry can coexist. The aim of this study was to address whether a monocular stimulus that is reported to be suppressed from awareness can continue to contribute to the perception of stereoscopic depth. Our results showed that stereoscopic depth perception was still evident when incompatible monocular images differing in spatial frequency, orientation, spatial phase, or direction of motion engage in binocular rivalry. These results support the idea that binocular rivalry and stereopsis can coexist.

  15. Stereoscopic Three-Dimensional Visualization Applied to Multimodal Brain Images: Clinical Applications and a Functional Connectivity Atlas.

    Directory of Open Access Journals (Sweden)

    Gonzalo M Rojas

    2014-11-01

    Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity.

  16. Stereoscopic three-dimensional visualization applied to multimodal brain images: clinical applications and a functional connectivity atlas.

    Science.gov (United States)

    Rojas, Gonzalo M; Gálvez, Marcelo; Vega Potler, Natan; Craddock, R Cameron; Margulies, Daniel S; Castellanos, F Xavier; Milham, Michael P

    2014-01-01

    Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity.

  17. Measuring perceived depth in natural images and study of its relation with monocular and binocular depth cues

    Science.gov (United States)

    Lebreton, Pierre; Raake, Alexander; Barkowsky, Marcus; Le Callet, Patrick

    2014-03-01

    The perception of depth in images and video sequences is based on different depth cues. Previous studies have considered depth perception thresholds as a function of viewing distance (Cutting and Vishton, 1995), as well as the combination of different monocular depth cues, their quantitative relation to binocular depth cues, and their possible types of interaction (Landy, 1995). However, these studies consider only artificial stimuli, and none of them attempts to quantify the contributions of monocular and binocular depth cues relative to each other in the specific context of natural images. This study targets that case: the strengths of the different depth cues are evaluated against each other using a carefully designed image database that covers as many combinations as possible of monocular (linear perspective, texture gradient, relative size, and defocus blur) and binocular depth cues. The 200 images were evaluated in two distinct subjective experiments that separately assessed perceived depth and the different monocular depth cues. The methodology and the definition of the different rating scales are detailed, and the image database (DC3Dimg) is released for the scientific community.
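
    The cue-combination literature cited above (Landy, 1995) is often summarized by inverse-variance weighting of the individual depth estimates; a minimal sketch of that rule, for illustration only (it is not the experimental procedure of this study):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted linear cue combination: each depth estimate is
    weighted by the inverse of its variance, the statistically optimal rule
    for independent Gaussian cues. Returns the fused estimate and weights."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return float(np.dot(w, np.asarray(estimates, dtype=float))), w
```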

  18. Extracting hand articulations from monocular depth images using curvature scale space descriptors

    Institute of Scientific and Technical Information of China (English)

    Shao-fan WANG; Chun LI; De-hui KONG; Bao-cai YIN

    2016-01-01

    We propose a framework of hand articulation detection from a monocular depth image using curvature scale space (CSS) descriptors. We extract the hand contour from an input depth image, and obtain the fingertips and finger-valleys of the contour using the local extrema of a modified CSS map of the contour. Then we recover the undetected fingertips according to the local change of depths of points in the interior of the contour. Compared with traditional appearance-based approaches using either angle detectors or convex hull detectors, the modified CSS descriptor extracts the fingertips and finger-valleys more precisely since it is more robust to noisy or corrupted data; moreover, the local extrema of depths recover the fingertips of bending fingers well while traditional appearance-based approaches hardly work without matching models of hands. Experimental results show that our method captures the hand articulations more precisely compared with three state-of-the-art appearance-based approaches.
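
    The curvature-scale-space idea, smoothing the closed hand contour and reading fingertips and finger-valleys off curvature extrema, can be sketched as follows (a simplified single-scale version with hypothetical names; the paper's modified CSS map tracks extrema across many scales):

```python
import numpy as np

def contour_curvature(contour, sigma):
    """Smooth a closed (N, 2) contour with a periodic Gaussian, then return
    the signed curvature at each point; fingertips and finger-valleys show
    up as local curvature extrema of opposite sign."""
    n = len(contour)
    t = np.arange(n)
    # Periodic Gaussian kernel, centred then rotated to index 0 for the FFT.
    k = np.exp(-0.5 * ((t - n // 2) / sigma) ** 2)
    k /= k.sum()
    def smooth(v):
        return np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(np.fft.ifftshift(k))))
    x, y = smooth(contour[:, 0]), smooth(contour[:, 1])
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    # Signed curvature of a parametric curve.
    return (dx * ddy - dy * ddx) / np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-12)
```

    On a circle of radius r the returned curvature is approximately 1/r everywhere, which is a convenient sanity check.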

  20. Macroscopic three-dimensional particle location using stereoscopic imaging and astigmatic aberrations.

    Science.gov (United States)

    Fuchs, Thomas; Hain, Rainer; Kähler, Christian J

    2014-12-15

    This Letter presents a stereoscopic imaging concept for measuring the locations of particles in three-dimensional space. The method is derived from astigmatism particle tracking velocimetry (APTV), a powerful technique that is capable of determining 3D particle locations with a single camera. APTV locates particle xy coordinates with high accuracy, while the particle z coordinate has a larger location uncertainty. This is not a problem for 3D2C (i.e., three dimensions, two velocity components) measurements, but for highly three-dimensional flows, it is desirable to measure three velocity components with similar accuracy. The stereoscopic APTV approach discussed in this report has this capability. The technique employs APTV for giving an initial estimate of the particle locations. With this information, corresponding particle images on both sensors of the stereoscopic imaging system are matched. Particle locations are then determined by mapping the two particle image sensor locations to physical space. The measurement error of stereo APTV, determined by acquiring images of 1-μm DEHS particles in a 40 mm×40 mm×20 mm measurement volume in air at Δxyz→0 between two frames, is less than 0.012 mm for xy and 0.025 mm for z. This error analysis proves the excellent suitability of stereo APTV for the measurement of three-dimensional flows in macroscopic domains.
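
    The final mapping step, locating a particle by intersecting the rays from the two sensors, can be sketched as a least-squares ray intersection (an illustrative sketch assuming calibrated pinhole rays; not the authors' calibration procedure):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares intersection of two rays p_i + t_i * d_i: solves for
    the 3D point minimizing the summed squared distance to both rays."""
    A, b = [], []
    for p, d in ((p1, d1), (p2, d2)):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A.append(P)
        b.append(P @ np.asarray(p, float))
    X, *_ = np.linalg.lstsq(np.sum(A, axis=0), np.sum(b, axis=0), rcond=None)
    return X
```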

  1. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterparts. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching for optimal GRF and LRF indexes in the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.
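
    The lookup idea, replacing a regression against human opinion scores with an index into a table of learnt prototypes, can be reduced to a toy nearest-neighbour sketch (hypothetical names; the paper's LRF/GRF dictionaries are learnt, not hand-set):

```python
import numpy as np

def build_lookup(prototypes, qualities):
    # A "quality lookup": feature prototypes paired with quality values.
    return np.asarray(prototypes, float), np.asarray(qualities, float)

def lookup_quality(lookup, feature):
    # Return the quality attached to the nearest stored prototype,
    # along with the prototype's index.
    protos, qualities = lookup
    idx = int(np.argmin(np.linalg.norm(protos - np.asarray(feature, float), axis=1)))
    return qualities[idx], idx
```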

  2. On the accuracy of localization achievable in fiducial-based stereoscopic image registration system using an electronic portal imaging device.

    Science.gov (United States)

    Ung, N M; Wee, L

    2012-06-01

    Portal imaging using an electronic portal imaging device (EPID) is a well-established image-guided radiation therapy (IGRT) technique for external beam radiation therapy. The aims of this study are threefold: (i) to assess the accuracy of isocentre localization in fiducial-based stereoscopic image registration, (ii) to investigate the impact of errors in the beam collimation device on stereoscopic registration, and (iii) to evaluate the intra- and inter-observer variability in stereoscopic registration. Portal images of a ball-bearing phantom were acquired and stereoscopic image registrations were performed using a point centred in the ball bearing as the surrogate for registration. Experiments were replicated by applying intentional offsets to the beam collimation device to simulate collimation errors. The accuracy of fiducial marker localization was assessed by repeating the experiment using three spherical lead shots implanted in a pelvic phantom. Portal images of the pelvic phantom were given to four expert users to assess the inter-observer variability in performing registration. The isocentre localization accuracy tested using the ball-bearing phantom was within 0.3 mm. Gravity-induced systematic errors of 2 mm in the beam collimation device resulted in positioning offsets of the order of 2 mm opposing the simulated errors. Relatively large inter-portal pair projection errors, ranging from 1.3 mm to 1.8 mm, were observed with simulated errors in the beam collimation device. The intra-user and inter-user variabilities were observed to be 0.8 mm and 0.4 mm, respectively. Fiducial-based stereoscopic image registration using EPID is robust for IGRT procedures.

  3. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    Science.gov (United States)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
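
    The first-stage features, disparity statistics weighted by a visual saliency map, can be sketched as follows (a minimal sketch with hypothetical names; the paper's feature vector and its 3D saliency model are richer):

```python
import numpy as np

def saliency_weighted_stats(disparity, saliency):
    """Comfort-oriented features: saliency-weighted mean and standard
    deviation of disparity, plus the disparity span within salient pixels."""
    w = saliency / saliency.sum()
    mean = float((w * disparity).sum())
    std = float(np.sqrt((w * (disparity - mean) ** 2).sum()))
    salient = saliency > saliency.mean()
    span = float(np.ptp(disparity[salient])) if salient.any() else 0.0
    return np.array([mean, std, span])
```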

  4. 3D Reconstruction from a Single Still Image Based on Monocular Vision of an Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    We propose a framework combining machine learning with dynamic optimization to reconstruct a 3D scene automatically from a single still image of an unstructured outdoor environment, based on monocular vision with an uncalibrated camera. After a first segmentation of the image, a search-tree strategy based on Bayes' rule is used to identify the occlusion hierarchy of all regions. After a second, superpixel segmentation, the AdaBoost algorithm is applied to integrate detection of depth cues from lighting, texture, and material. Finally, all the factors above are optimized under constraints, yielding the whole depth map of the image. The source image is then integrated with its depth map, in point-cloud or bilinear-interpolation style, to realize the 3D reconstruction. Experiments comparing our method with typical methods on an associated database demonstrate that it improves, to a certain extent, the plausibility of the estimated overall 3D structure of the scene, and that it requires neither manual assistance nor any camera model information.

  5. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors-the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.
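
    The binocular-combination ingredient is commonly modelled by quadratic (energy) summation of the two monocular responses; a toy sketch of that ingredient alone, with hypothetical names (the paper's metric also involves binocular frequency integration and is more elaborate):

```python
import numpy as np

def cyclopean_score(sim_left, sim_right):
    """Energy-sum the per-eye similarity maps (quadratic summation),
    normalized so identical unit inputs give 1, then pool by averaging."""
    cyclopean = np.sqrt(sim_left ** 2 + sim_right ** 2) / np.sqrt(2.0)
    return float(cyclopean.mean())
```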

  6. Why is binocular rivalry uncommon? Discrepant monocular images in the real world

    Directory of Open Access Journals (Sweden)

    Derek Henry Arnold

    2011-10-01

    Full Text Available When different images project to corresponding points in the two eyes they can instigate a phenomenon called binocular rivalry (BR, wherein each image seems to intermittently disappear such that only one of the two images is seen at a time. Cautious readers may have noted an important caveat in the opening sentence – this situation can instigate BR, but usually it doesn’t. Unmatched monocular images are frequently encountered in daily life due to either differential occlusions of the two eyes or because of selective obstructions of just one eye, but this does not tend to induce BR. Here I will explore the reasons for this and discuss implications for BR in general. It will be argued that BR is resolved in favour of the instantaneously stronger neural signal, and that this process is driven by an adaptation that enhances the visibility of distant fixated objects over that of more proximate obstructions of an eye. Accordingly, BR would reflect the dynamics of an inherently visual operation that usually deals with real-world constraints.

  7. Stereoscopic Imaging in Hypersonic Boundary Layers using Planar Laser-Induced Fluorescence

    Science.gov (United States)

    Danehy, Paul M.; Bathel, Brett; Inman, Jennifer A.; Alderfer, David W.; Jones, Stephen B.

    2008-01-01

    Stereoscopic time-resolved visualization of three-dimensional structures in a hypersonic flow has been performed for the first time. Nitric Oxide (NO) was seeded into hypersonic boundary layer flows that were designed to transition from laminar to turbulent. A thick laser sheet illuminated and excited the NO, causing spatially-varying fluorescence. Two cameras in a stereoscopic configuration were used to image the fluorescence. The images were processed in a computer visualization environment to provide stereoscopic image pairs. Two methods were used to display these image pairs: a cross-eyed viewing method which can be viewed by naked eyes, and red/blue anaglyphs, which require viewing through red/blue glasses. The images visualized three-dimensional information that would be lost if conventional planar laser-induced fluorescence imaging had been used. Two model configurations were studied in NASA Langley Research Center's 31-Inch Mach 10 Air Wind tunnel. One model was a 10 degree half-angle wedge containing a small protuberance to force the flow to transition. The other model was a 1/3-scale, truncated Hyper-X forebody model with blowing through a series of holes to force the boundary layer flow to transition to turbulence. In the former case, low flowrates of pure NO seeded and marked the boundary layer fluid. In the latter, a trace concentration of NO was seeded into the injected N2 gas. The three-dimensional visualizations have an effective time resolution of about 500 ns, which is fast enough to freeze this hypersonic flow. The 512x512 resolution of the resulting images is much higher than high-speed laser-sheet scanning systems with similar time response, which typically measure 10-20 planes.
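
    The red/blue anaglyph display method mentioned above takes the red channel from the left view and the remaining channels from the right view, so that red/blue glasses route one view to each eye; a minimal sketch (the function name is ours):

```python
import numpy as np

def red_blue_anaglyph(left_rgb, right_rgb):
    """Build a red/blue anaglyph from an aligned stereo pair of (H, W, 3)
    images: red from the left view, green and blue from the right view."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out
```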

  8. Exploiting Depth From Single Monocular Images for Object Detection and Semantic Segmentation

    Science.gov (United States)

    Cao, Yuanzhouhan; Shen, Chunhua; Shen, Heng Tao

    2017-02-01

    Augmenting RGB data with measured depth has been shown to improve the performance of a range of tasks in computer vision including object detection and semantic segmentation. Although depth sensors such as the Microsoft Kinect have facilitated easy acquisition of such depth information, the vast majority of images used in vision tasks do not contain depth information. In this paper, we show that augmenting RGB images with estimated depth can also improve the accuracy of both object detection and semantic segmentation. Specifically, we first exploit the recent success of depth estimation from monocular images and learn a deep depth estimation model. Then we learn deep depth features from the estimated depth and combine with RGB features for object detection and semantic segmentation. Additionally, we propose an RGB-D semantic segmentation method which applies a multi-task training scheme: semantic label prediction and depth value regression. We test our methods on several datasets and demonstrate that incorporating information from estimated depth improves the performance of object detection and semantic segmentation remarkably.

  9. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space

    Science.gov (United States)

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.

    2017-03-01

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction.
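
    For a quick sanity check against such ground truth, the parallel-camera special case relates disparity, depth, focal length, and baseline as d = f*B/Z (the dataset's verging-eye geometry with cyclotorsion is more involved; this sketch covers only the rectified case, and the names are ours):

```python
def rectified_disparity(depth_m, focal_px, baseline_m):
    # Pinhole, parallel-axis stereo: disparity in pixels is f * B / Z.
    return focal_px * baseline_m / depth_m

def rectified_depth(disparity_px, focal_px, baseline_m):
    # Inverse relation: depth in metres is f * B / d.
    return focal_px * baseline_m / disparity_px
```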

  10. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space.

    Science.gov (United States)

    Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P

    2017-03-28

    Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO-GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction.

  11. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of the optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though the optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel-kink cues, owing to the lack of explicit depth information in color fundus images. We propose a novel boundary-based conditional random field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to color gradients, the proposed method explicitly models depth, which is estimated from the fundus image itself using a coupled sparse dictionary trained on a set of image-depth-map pairs (the depth maps derived from optical coherence tomography). The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average Dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved good glaucoma classification performance, with an average AUC of 0.85 for five-fold cross-validation on RIM-ONE v2. In summary, we propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye at test time, it can be employed in large-scale glaucoma screening where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    Science.gov (United States)

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

  13. A cohesive modular system for real-time stereoscopic secure image processing and evaluation

    Science.gov (United States)

    Galli, Raffaello; Lazarus, Ed

    2007-02-01

    In this paper we define an innovative modular real-time system to visualize, capture, manage, securely preserve, store, and play back stereoscopic images. The system, called "Solid-Look", together with the "StereOpsis" cameras, will allow military personnel, EOD specialists, and private-industry operators to literally "see through the robot's eyes". The system enables the operator to control the robot as if his or her head were located on the robot itself, positioning and zooming the camera toward the visual target using the operator's eye and head movements, without any wearable devices and leaving the operator's hands free for other tasks. The stereo cameras perform zooming and image stabilization for controlled, smooth vision. The display enables stereoscopic vision without the need for glasses. Every image frame is authenticated, encrypted, and timestamped to ensure integrity and confidentiality during post-capture playback or when presented as evidence in court. Operation of the system is secured by the administrator's biometric authentication. Solid-Look's modular design can be used in multiple industries, from homeland security to pharmaceuticals, including research, forensic, and underwater inspections, and will provide great benefit to the performance, speed, and accuracy of operations.

  14. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

Since stereoscopic images can provide observers with both a realistic and an uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one with the largest average disparity. Second, three visual features (average disparity, average width and spatial complexity of the foreground object) are computed from the perspective of visual attention. However, object width and complexity do not influence perceived visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves high consistency between objective and subjective visual comfort scores: the Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
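The foreground-selection and feature-extraction stages described in this record can be sketched as follows. The abstract does not name a specific segmentation method, so equal-width binning of the disparity range stands in for it, and a mean-gradient measure stands in for the paper's spatial-complexity feature; both are illustrative assumptions.

```python
import numpy as np

def foreground_features(disparity, n_segments=4):
    """Segment a disparity map, take the segment with the largest
    average disparity as the foreground object, and return its three
    visual-attention features: average disparity, width (in pixels)
    and a gradient-based spatial-complexity proxy."""
    # equal-width binning of the disparity range (stand-in segmentation)
    edges = np.linspace(disparity.min(), disparity.max(), n_segments + 1)
    labels = np.clip(np.digitize(disparity, edges) - 1, 0, n_segments - 1)
    # foreground = segment with the biggest average disparity
    means = [disparity[labels == k].mean() if np.any(labels == k) else -np.inf
             for k in range(n_segments)]
    fg = labels == int(np.argmax(means))
    width = int(np.any(fg, axis=0).sum())          # horizontal extent
    gy, gx = np.gradient(disparity)
    complexity = float(np.hypot(gx, gy)[fg].mean())
    return float(disparity[fg].mean()), width, complexity
```

The four category-specific comfort models that follow this stage in the paper are not reproduced here.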

  15. Real-time auto-stereoscopic visualization of 3D medical images

    Science.gov (United States)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated contemporaneous views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who gain a real 3D perception of the visualized scene without extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  16. Blind Image Quality Assessment for Stereoscopic Images via Deep Learning

    Institute of Scientific and Technical Information of China (English)

    田维军; 邵枫; 蒋刚毅; 郁梅

    2016-01-01

Stereoscopic image quality assessment is an effective way to evaluate the performance of stereoscopic video systems, but how to utilize human visual characteristics effectively remains a research challenge. This paper proposes a blind quality assessment method for stereoscopic images based on deep learning, composed of two stages: training and testing. In the training stage, Gabor filters are first applied to the left and right distorted images, and statistical features at different scales and orientations are extracted as monocular features. Then, following the binocular-rivalry property of the human visual system, the left and right images are fused into a cyclopean map, from which histogram-of-oriented-gradient features are extracted as binocular features. Finally, a regression model between the features and subjective scores is trained with a deep belief network. In the testing stage, the left- and right-image qualities are predicted from the established regression model and combined to obtain the stereoscopic image quality. Experimental results show that the proposed method performs well on both symmetric and asymmetric stereoscopic image databases and agrees closely with human subjective perception.
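The cyclopean-map fusion described in this record lends itself to a compact sketch. The record does not give the exact fusion rule, so the version below weights each pixel by its local contrast energy, a crude stand-in for the Gabor-response weighting that models binocular rivalry (the stronger stimulus dominates); `local_energy` and its window size are illustrative choices, not the paper's.

```python
import numpy as np

def local_energy(img, k=7):
    """Local contrast energy: variance in a k-by-k window (a simple
    stand-in for the Gabor filter responses used in the paper)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

def cyclopean_map(left, right):
    """Fuse left/right views into a single 'cyclopean' image by
    weighting each pixel with its local energy, mimicking binocular
    rivalry: the higher-energy view dominates at each pixel."""
    el, er = local_energy(left), local_energy(right)
    w = el / (el + er + 1e-12)
    return w * left + (1.0 - w) * right
```

With a featureless left view and a textured right view, the fused map follows the textured view, which is the rivalry behaviour the weighting is meant to capture.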

  17. Binocular and multi-view parallax images acquisition for three dimensional stereoscopic displays

    Science.gov (United States)

    Ge, Hongsheng; Sang, Xinzhu; Zhao, Tianqi; Yuan, Jinhui; Leng, Junmin; Zhang, Ying; Yan, Binbin

    2012-11-01

It is important to acquire proper parallax images for a stereoscopic display system. By setting the proper distance between the cameras and the location of the convergence point in the capturing configuration, a displayed 3D scene with appropriate stereo depth and the expected effect in front of and behind the display screen can be obtained directly. The quantitative relationship between the parallax and the parameters of the two-camera capturing configuration is presented. A capturing system with multiple cameras, which acquires equal parallaxes between adjacent captured images for an autostereoscopic display system, is also discussed. The proposed methods are demonstrated by experimental results: images captured with the calculated parameters show the expected results on the 3D display system, providing viewers with better immersion and visual comfort without any extra processing.
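The record does not reproduce the authors' equations, but the standard small-angle relationship for a converged two-camera rig gives the flavour of how baseline and convergence distance control where scene points land relative to the screen; this is a textbook approximation, not the paper's exact formulation.

```python
def screen_parallax(Z, baseline, focal_len, convergence):
    """Horizontal parallax (same units as focal_len) of a point at
    depth Z for a converged two-camera rig. Points at the convergence
    distance land on the screen plane (zero parallax); nearer points
    appear in front of the screen (negative parallax), farther points
    behind it (positive). Small-angle approximation."""
    return focal_len * baseline * (1.0 / convergence - 1.0 / Z)
```

A multi-camera rig follows the same relation pair by pair, so equal baselines between adjacent cameras yield the equal adjacent-view parallaxes the record aims for.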

  18. Stereoscopic Planar Laser-Induced Fluorescence Imaging at 500 kHz

    Science.gov (United States)

    Medford, Taylor L.; Danehy, Paul M.; Jones, Stephen B.; Jiang, N.; Webster, M.; Lempert, Walter; Miller, J.; Meyer, T.

    2011-01-01

A new measurement technique for obtaining time- and spatially-resolved image sequences in hypersonic flows is developed. Nitric-oxide planar laser-induced fluorescence (NO PLIF) has previously been used to investigate transition from laminar to turbulent flow in hypersonic boundary layers using both planar and volumetric imaging capabilities. Low flow rates of NO were typically seeded into the flow, minimally perturbing the flow. The volumetric imaging was performed at a measurement rate of 10 Hz using a thick planar laser sheet that excited NO fluorescence. The fluorescence was captured by a pair of cameras having slightly different views of the flow. Subsequent stereoscopic reconstruction of these images allowed the three-dimensional flow structures to be viewed. In the current paper, this approach has been extended to 50,000 times higher repetition rates. A laser operating at 500 kHz excites the seeded NO molecules, and a camera, synchronized with the laser and fitted with a beam-splitting assembly, acquires two separate images of the flow. The resulting stereoscopic images provide three-dimensional flow visualizations at 500 kHz for the first time. The 200 ns exposure time in each frame is fast enough to freeze the flow, while the 500 kHz repetition rate is fast enough to time-resolve changes in the flow being studied. This method is applied to visualize the evolving hypersonic flow structures that propagate downstream of a discrete protuberance attached to a flat plate. The technique was demonstrated in the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel facility. Different tunnel Reynolds number conditions, NO flow rates and two different cylindrical protuberance heights were investigated. The location of the onset of flow unsteadiness, an indicator of transition, was observed to move downstream during the tunnel runs, coinciding with an increase in the model temperature.

  19. A method for converting three-dimensional models into auto-stereoscopic images based on integral photography

    Science.gov (United States)

    Katayama, Miwa; Iwadate, Yuichi

    2008-02-01

    We have been researching three-dimensional (3D) reconstruction from images captured by multiple cameras. Currently, we are investigating how to convert 3D models into stereoscopic images. We are interested in integral photography (IP), one of many stereoscopic display systems, because the IP display system reconstructs complete 3D auto-stereoscopic images in theory. This system consists of a high-resolution liquid-crystal panel and a lens array. It enables users to obtain a perspective view of 3D auto-stereoscopic images from any direction. We developed a method for converting 3D models into IP images using the OpenGL API. This method can be applied to normal CG objects because the 3D model is described in a CG format. In this paper, we outline our 3D modeling method and the performance of an IP display system. Then we discuss the method for converting 3D models into IP images and report experimental results.

  20. D(max) for stereoscopic depth perception with simulated monovision correction.

    Science.gov (United States)

    Qian, Jin; Adeseye, Samuel A; Stevenson, Scott B; Patel, Saumil S; Bedell, Harold E

    2012-01-01

    Persons who wear monovision correction typically receive a clear image in one eye and a blurred image in the other eye. Although monovision is known to elevate the minimum stereoscopic threshold (Dmin), it is uncertain how it influences the largest binocular disparity for which the direction of depth can reliably be perceived (Dmax). In this study, we compared Dmax for stereo when one eye's image is blurred to Dmax when both eyes' images are either clear or blurred. The stimulus was a pair of vertically oriented, random-line patterns. To simulate monovision correction with +1.5 or +2.5 D defocus, the images of the line patterns presented to one eye were spatially low-pass filtered while the patterns presented to the other eye remained unfiltered. Compared to binocular viewing without blur, Dmin is elevated substantially more in the presence of monocular than binocular simulated blur. Dmax is reduced in the presence of simulated monocular blur by between 13 and 44%, compared to when the images in both eyes are clear. In contrast, when the targets presented to both eyes are blurred equally, Dmax either is unchanged or increases slightly, compared to the values measured with no blur. In conjunction with the elevation of Dmin, the reduction of Dmax with monocular blur indicates that the range of useful stereoscopic depth perception is likely to be compressed in patients who wear monovision corrections.
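The monocular-blur manipulation in this study can be approximated with a frequency-domain Gaussian low-pass filter applied to one eye's image only. The mapping from dioptres of defocus (+1.5 or +2.5 D) to a filter width depends on pupil size and viewing distance, so the `sigma` below is an arbitrary illustrative value, not a calibrated one.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Frequency-domain Gaussian low-pass filter, a simple stand-in
    for the simulated defocus blur used in the study. The DC gain is
    1, so mean luminance is preserved."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def monovision_pair(left, right, sigma=2.0):
    """Blur only one eye's image, as in simulated monovision."""
    return gaussian_lowpass(left, sigma), right
```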

  1. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    Science.gov (United States)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits, not least being cost-effective and easy to disseminate, anatomy is inherently three-dimensional. When 2D visualizations are used to illustrate complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open-source, high-quality 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection, viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems.

  2. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    Science.gov (United States)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

Thermal cameras are widely used in driver vision enhancement systems. In pathless terrain, however, driving becomes challenging without stereoscopic perception. Stereoscopic imaging is a long-established technique with well-understood physical and physiological parameters, and it has recently seen considerable commercial hype, especially in display techniques. The commercial market is already flooded with systems based on goggle-aided 3D-viewing techniques. However, their use is limited for military applications, since goggles are not accepted by military users for several reasons. The proposed uncooled stereoscopic thermal imaging camera, with a geometrical resolution of 640x480 pixels, is perfectly matched to the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows a stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  3. Assessment of stereoscopic optic disc images using an autostereoscopic screen – experimental study

    Directory of Open Access Journals (Sweden)

    Vaideanu Daniella

    2008-07-01

Full Text Available Abstract Background Stereoscopic assessment of optic disc morphology is an important part of the care of patients with glaucoma. The aim of this study was to assess stereo viewing of stereoscopic optic disc images using an example of the new technology of autostereoscopic screens, compared with liquid-crystal shutter goggles. Methods Independent assessment of glaucomatous disc characteristics and measurement of optic disc and cup parameters while using either an autostereoscopic screen or liquid-crystal shutter goggles synchronized with a view-switching display. The main outcome measures were inter-modality agreements between the two modalities, evaluated by the weighted kappa test and Bland-Altman plots. Results Inter-modality agreement for measuring optic disc parameters was good [average kappa coefficient for vertical cup/disc ratio was 0.78 (95% CI 0.62–0.91) and 0.81 (95% CI 0.6–0.92) for observers 1 and 2 respectively]. Agreement between modalities for assessing optic disc characteristics for glaucoma on a five-point scale was very good, with a kappa value of 0.97. Conclusion This study compared two different methods of stereo viewing. The assessments of the different optic disc and cup parameters were comparable between an example of the newly developing autostereoscopic display technologies and the shutter-goggles system used, and inter-modality agreement was high. This new technology carries potential clinical usability benefits in different areas of ophthalmic practice.

  4. Extending the Life of Virtual Heritage: Reuse of Tls Point Clouds in Synthetic Stereoscopic Spherical Images

    Science.gov (United States)

    Garcia Fernandez, J.; Tammi, K.; Joutsiniemi, A.

    2017-02-01

Recent advances in Terrestrial Laser Scanning (TLS), in terms of cost and flexibility, have consolidated this technology as an essential tool for the documentation and digitization of Cultural Heritage. However, once the TLS data has been used, it typically remains in storage, going to waste. How can highly accurate and dense point clouds of the built heritage be processed for reuse, especially to engage a broader audience? This paper aims to answer that question via a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: Virtual Heritage dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), the paper develops a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those that prompt visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.

  5. Performance Analysis of Disparity for Stereoscopic Image Pairs

    Institute of Scientific and Technical Information of China (English)

    安平; 张兆扬

    2001-01-01

Disparity is the geometrical difference between the images of a stereoscopic pair. In this paper, we give a comprehensive analysis of the statistical characteristics of disparity. Based on experiments, we discuss the relations between disparity, depth and object, the relation between block size and disparity estimation, and the influence of error criteria on disparity estimation.

  7. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    NARCIS (Netherlands)

    Ragni, D.; Van Oudheusden, B.W.; Scarano, F.

    2011-01-01

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes pe

  8. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

Despite the current availability in resource-rich regions of advanced scanning and 3-D imaging technologies in ophthalmology practice, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research has demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images, and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques, and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
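The screening parameters themselves are simple once cup and disc segmentations exist. A minimal sketch, assuming binary masks produced by whatever registration and segmentation precede this step (neither is reproduced here):

```python
import numpy as np

def cdr_car(cup_mask, disc_mask):
    """Cup-to-disc diameter ratio (CDR, using vertical extents) and
    cup-to-disc area ratio (CAR) from binary segmentation masks."""
    def vertical_diameter(mask):
        rows = np.flatnonzero(np.any(mask, axis=1))
        return rows[-1] - rows[0] + 1          # extent in pixels
    cdr = vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
    car = cup_mask.sum() / disc_mask.sum()
    return float(cdr), float(car)
```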

  9. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy.

    Science.gov (United States)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-07

Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between the two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated using images of a motion phantom and four archived clinical cases acquired with either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient datasets with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
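The feature-identification step (points of high intensity gradient) can be sketched directly; the SIFT-descriptor association between the two projections, and the geometry that turns matched points into a 3D position, are omitted.

```python
import numpy as np

def feature_points(img, n=10, border=4):
    """Return the (row, col) coordinates of the n pixels with the
    largest intensity-gradient magnitude, suppressing the image
    border. These are candidate tissue-feature points that would
    then be matched across the two x-ray projections."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # suppress the border so padding artifacts are never selected
    mag[:border, :] = 0
    mag[-border:, :] = 0
    mag[:, :border] = 0
    mag[:, -border:] = 0
    flat = np.argsort(mag, axis=None)[-n:]        # top-n by gradient
    return np.column_stack(np.unravel_index(flat, img.shape))
```

On a synthetic image containing a single bright patch, every selected point lies on or next to the patch boundary, where the gradient concentrates.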

  10. Objective quality assessment of stereoscopic images with vertical disparity using EEG

    Science.gov (United States)

    Shahbazi Avarvand, Forooz; Bosse, Sebastian; Müller, Klaus-Robert; Schäfer, Ralf; Nolte, Guido; Wiegand, Thomas; Curio, Gabriel; Samek, Wojciech

    2017-08-01

    Objective. Neurophysiological correlates of vertical disparity in 3D images are studied in an objective approach using EEG technique. These disparities are known to negatively affect the quality of experience and to cause visual discomfort in stereoscopic visualizations. Approach. We have presented four conditions to subjects: one in 2D and three conditions in 3D, one without vertical disparity and two with different vertical disparity levels. Event related potentials (ERPs) are measured for each condition and the differences between ERP components are studied. Analysis is also performed on the induced potentials in the time frequency domain. Main results. Results show that there is a significant increase in the amplitude of P1 components in 3D conditions in comparison to 2D. These results are consistent with previous studies which have shown that P1 amplitude increases due to the depth perception in 3D compared to 2D. However the amplitude is significantly smaller for maximum vertical disparity (3D-3) in comparison to 3D with no vertical disparity. Our results therefore suggest that the vertical disparity in 3D-3 condition decreases the perception of depth compared to other 3D conditions and the amplitude of P1 component can be used as a discriminative feature. Significance. The results show that the P1 component increases in amplitude due to the depth perception in the 3D stimuli compared to the 2D stimulus. On the other hand the vertical disparity in the stereoscopic images is studied here. We suggest that the amplitude of P1 component is modulated with this parameter and decreases due to the decrease in the perception of depth.

  11. Objective quality assessment of stereoscopic images with vertical disparity using EEG.

    Science.gov (United States)

    Avarvand, Forooz Shahbazi; Bosse, Sebastian; Müller, Klaus-Robert; Schäfer, Ralf; Nolte, Guido; Wiegand, Thomas; Curio, Gabriel; Samek, Wojciech

    2017-05-25

    Neurophysiological correlates of vertical disparity in 3D images are studied in an objective approach using EEG technique. These disparities are known to negatively affect the quality of experience and to cause visual discomfort in stereoscopic visualizations. We have presented four conditions to subjects: one in 2D and three conditions in 3D, one without vertical disparity and two with different vertical disparity levels. Event related potentials (ERPs) are measured for each condition and the differences between ERP components are studied. Analysis is also performed on the induced potentials in the time frequency domain. Results show that there is a significant increase in the amplitude of P1 components in 3D conditions in comparison to 2D. These results are consistent with previous studies which have shown that P1 amplitude increases due to the depth perception in 3D compared to 2D. However the amplitude is significantly smaller for maximum vertical disparity (3D-3) in comparison to 3D with no vertical disparity. Our results therefore suggest that the vertical disparity in 3D-3 condition decreases the perception of depth compared to other 3D conditions and the amplitude of P1 component can be used as a discriminative feature. The results show that the P1 component increases in amplitude due to the depth perception in the 3D stimuli compared to the 2D stimulus. On the other hand the vertical disparity in the stereoscopic images is studied here. We suggest that the amplitude of P1 component is modulated with this parameter and decreases due to the decrease in the perception of depth.

  12. Quality Index for Stereoscopic Images by Separately Evaluating Adding and Subtracting.

    Directory of Open Access Journals (Sweden)

    Jiachen Yang

Full Text Available The human visual system (HVS) plays an important role in stereo image quality perception, which has aroused wide interest in how knowledge of visual perception can be exploited in image quality assessment models. This paper proposes a full-reference metric for quality assessment of stereoscopic images based on the binocular difference channel and the binocular summation channel. For a stereo pair, the binocular summation map and binocular difference map are first computed by adding and subtracting the left and right images. The binocular summation is then decoupled into two parts, namely additive impairments and detail losses, and its quality is obtained as the adaptive combination of the quality of the detail losses and of the additive impairments. The quality of the binocular difference is computed using the Contrast Sensitivity Function (CSF) and weighted multi-scale SSIM (MS-SSIM). Finally, the quality of binocular summation and binocular difference is integrated into an overall quality index. The experimental results indicate that, compared with existing metrics, the proposed metric is highly consistent with subjective quality assessment and is a robust measure. The results also indirectly support the hypothesis of the existence of binocular summation and binocular difference channels.
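The first step of the metric is a simple, perfectly invertible decomposition; the CSF weighting and MS-SSIM pooling that follow are not reproduced here. Using half-sums keeps the channels on the same scale as the inputs, a normalization choice of this sketch rather than necessarily the paper's.

```python
import numpy as np

def binocular_channels(left, right):
    """Split a stereo pair into binocular summation and difference
    channels. The pair is exactly recoverable: left = s + d and
    right = s - d, so no information is lost by the decomposition."""
    s = (left + right) / 2.0
    d = (left - right) / 2.0
    return s, d
```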

  13. Binocular depth acuity research to support the modular multi-spectral stereoscopic night vision goggle

    Science.gov (United States)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Paicopolis, Peter; Smoot, Jennifer; Kregel, Mark; Corona, Bernard

    2006-05-01

    This paper discusses the depth acuity research conducted in support of the development of a Modular Multi-Spectral Stereoscopic (M2S2) night vision goggle (NVG), a customizable goggle that lets the user select one of five goggle configurations: monocular thermal, monocular image intensifier (I2), binocular I2, binocular thermal, and binocular dual-waveband (thermal imagery to one eye and I2 imagery to the other eye). The motives for the development of this type of customizable goggle were (1) the need for an NVG that allows the simultaneous use of two wavebands, (2) the need for an alternative sensor fusion method to avoid the potential image degradation that may accompany digitally fused images, (3) a requirement to provide the observer with stereoscopic, dual spectrum views of a scene, and (4) the need to handle individual user preferences for sensor types and ocular configurations employed in various military operations. Among the increases in functionality that the user will have with this system is the ability to convert from a binocular I2 device (needed for detailed terrain analysis during off-road mobility) to a monocular thermal device (for increased situational awareness in the unaided eye during nights with full moon illumination). Results of the present research revealed potential depth acuity advantages that may apply to off-road terrain hazard detection for the binocular thermal configuration. The results also indicated that additional studies are needed to address ways to minimize binocular incompatibility for the dual waveband configuration.

  14. Recent variation of the Las Vacas Glacier Mt. Aconcagua region, Central Andes, Argentina, based on ASTER stereoscopic images

    Science.gov (United States)

    Lenzano, M. G.; Leiva, J. C.; Lenzano, L.

    2010-01-01

This work presents the results of ASTER stereoscopic image processing to calculate the volume changes of Las Vacas Glacier. Medium-resolution satellite images (ASTER level 1A, 15 m pixels) from February 2001 and 2007 were processed using the satellite digital photogrammetry method (Kääb, 2005). The comparison of the two generated DTMs yields results that are acceptable within the parameters and precision that can be obtained with this kind of sensor and processing methodology.
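The volume-change computation behind such a DTM comparison is a straightforward elevation difference; the hard parts (DTM generation from the stereo pairs and co-registration) are assumed already done. The default pixel size matches the 15 m ASTER level-1A resolution mentioned above.

```python
import numpy as np

def volume_change(dem_t0, dem_t1, pixel_size=15.0):
    """Glacier volume change between two co-registered DEMs:
    sum the per-cell elevation differences and multiply by the
    cell area. With elevations in metres and pixel_size in metres,
    the result is in cubic metres (negative = volume loss)."""
    dh = dem_t1 - dem_t0
    return float(np.nansum(dh) * pixel_size ** 2)
```

For example, a uniform 1 m lowering over a 10x10-cell grid of 15 m pixels corresponds to a loss of 22 500 cubic metres.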

  15. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    Science.gov (United States)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
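The regional covariance descriptor at the heart of this approach is compact: stack per-pixel feature vectors (2D features plus depth) for a region and take their covariance, whose off-diagonal terms encode exactly the inter-feature correlations the paper exploits. A minimal sketch; the choice and extraction of the individual features are not reproduced here.

```python
import numpy as np

def region_covariance(features):
    """Covariance-matrix descriptor of an image region.
    `features` is an (N, d) array: one d-dimensional feature vector
    (e.g. intensity, orientation, contrast, depth) per pixel.
    Returns the d-by-d sample covariance matrix."""
    centered = features - features.mean(axis=0, keepdims=True)
    return centered.T @ centered / (len(features) - 1)
```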

  16. Monocular feature tracker for low-cost stereo vision control of an autonomous guided vehicle (AGV)

    Science.gov (United States)

    Pearson, Chris M.; Probert, Penelope J.

    1994-02-01

    We describe a monocular feature tracker (MFT), the first stage of a low cost stereoscopic vision system for use on an autonomous guided vehicle (AGV) in an indoor environment. The system does not require artificial markings or other beacons, but relies upon accurate knowledge of the AGV motion. Linear array cameras (LAC) are used to reduce the data and processing bandwidths. The limited information given by LAC requires modelling of the expected features. We model an obstacle as a vertical line segment touching the floor, and can distinguish between these obstacles and most other clutter in an image sequence. Detection of these obstacles provides sufficient information for local AGV navigation.

  17. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would also need to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  18. Induction of Monocular Stereopsis by Altering Focus Distance: A Test of Ames’s Hypothesis

    Directory of Open Access Journals (Sweden)

    Dhanraj Vishwanath

    2016-04-01

    Viewing a real three-dimensional scene or a stereoscopic image with both eyes generates a vivid phenomenal impression of depth known as stereopsis. Numerous reports have highlighted the fact that an impression of stereopsis can be induced in the absence of binocular disparity. A method claimed by Ames (1925) involved altering accommodative (focus) distance while monocularly viewing a picture. This claim was tested on naïve observers using a method inspired by the observations of Gogel and Ogle on the equidistance tendency. Consistent with Ames’s claim, most observers reported that the focus manipulation induced an impression of stereopsis comparable to that obtained by monocular-aperture viewing.

  19. Concept of an autostereoscopic system containing 29 million of stereoscopic image pairs

    Science.gov (United States)

    Grasnick, Armin

    2015-02-01

    The number of perspective views limits the viewing zone of a passive, untracked autostereoscopic display. To enhance the freedom of movement in front of the 3D display, the number of views has to increase as well. However, widening the viewing zone by raising the number of views lowers the resolution of each single perspective. A few companies have shown 3D displays with more than 8 or 9 views (including Sunny Ocean Studios' 64-view display). The number of effective orthoscopic stereo image pairs is a triangular number based on the number of perspective views n. Using stereoscopic glasses (with only 2 views), the triangular number nΔ is also 2. But in a 5-view display (e.g. the techXpert 3D display), nΔ=10. In the theoretical limit, each vertical line of a display, represented by a sub-pixel column, could carry a single view. On a real display with 7,680 sub-pixel columns, the resulting triangular number is more than 29 million. The display system guides more than one view into the pupil of the observer's eye. This superposition of views reduces channel separation and increases crosstalk. We examine whether a multitude of very low-resolution images with high crosstalk can reproduce a satisfying 3D image.
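
    Interpreting the "effective orthoscopic stereo image pairs" as the binomial count C(n, 2) = n(n-1)/2 reproduces the abstract's 5-view figure (nΔ = 10) and its 29-million figure for one view per sub-pixel column; this reading is our assumption, a quick sketch:

```python
# Number of distinct ordered-left-right stereo pairs obtainable from
# n perspective views, counted as C(n, 2) = n*(n-1)/2 (our interpretation
# of the "triangular number" in the abstract).

def stereo_pairs(n_views):
    return n_views * (n_views - 1) // 2

# A 5-view display yields 10 pairs; a display with 7,680 sub-pixel
# columns, one view per column, yields about 29.5 million pairs.
```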

  20. Differential processing of binocular and monocular gloss cues in human visual cortex

    Science.gov (United States)

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  1. Differential processing of binocular and monocular gloss cues in human visual cortex.

    Science.gov (United States)

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.

  2. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on the luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.
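
    As a rough illustration of the factorization at the core of this method, the sketch below implements plain NMF with Lee-Seung multiplicative updates for the Frobenius loss; the paper's detector additionally uses manifold regularization and color features, which this sketch omits:

```python
# Minimal NMF sketch: factor a nonnegative matrix V (n x m) into
# W (n x rank) and H (rank x m) with Lee-Seung multiplicative updates.
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=300, eps=1e-9):
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() + eps for _ in range(rank)] for _ in range(n)]
    H = [[random.random() + eps for _ in range(m)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H
```

    Each column of W acts as a parts-based basis vector, and the rows of H give the nonnegative encodings from which feature vectors would be extracted.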

  3. A METHOD FOR RECORDING AND VIEWING STEREOSCOPIC IMAGES IN COLOUR USING MULTICHROME FILTERS

    DEFF Research Database (Denmark)

    2000-01-01

    differences prescribed by the stereoscopic principle and supplementing the colour perception. For selecting the filters, the invention suggests an auxiliary test. For encoding the stereograms, the invention suggests a special process of channel separation and replacement. For colour correction...

  4. Partially converted stereoscopic images and the effects on visual attention and memory

    Science.gov (United States)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study comprised two experimental examinations of cognitive activities, such as visual attention and memory, in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was interposed between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax had been added. In order to examine the impact of dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks to 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition, in which a gray cell moved diagonally for a certain period of time with dynamic binocular disparity added; a 3D condition, in which binocular disparity was added to all gray cells; and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct

  5. Unsteady characteristics of near-wall turbulence using high repetition stereoscopic particle image velocimetry (PIV)

    Science.gov (United States)

    Foucaut, J. M.; Coudert, S.; Stanislas, M.

    2009-07-01

    This study is part of a project that is aimed at building dynamic boundary conditions near a solid wall, in order to reduce the large eddy simulation spatial resolution that is necessary in this region. The objective is to build a low-order dynamical system in a plane parallel to the wall, which will mimic the unsteady behaviour of turbulence. This dynamical system should be derived from a POD decomposition of the velocity field. The POD decomposition is to be applied on an experimental database of time-resolved velocity fields. In order to obtain the experimental database, a specific experiment of high-speed stereoscopic particle image velocimetry (PIV) has been performed. This experiment was carried out in the turbulent boundary layer of the LML wind tunnel. The plane under study was parallel to the wall located at 100 wall units. This database is validated via comparison with hot-wire anemometry (HWA). Despite some peak locking observed on the streamwise velocity component, the PDF and the power spectra are in very good agreement with the HWA results. The two-point spatial correlations are also in good agreement with the results from the literature. As the flow is time-resolved, space-time correlations are also computed. The convection of the flow structure is observed to be the most important effect at this wall distance. The next step is to compute the dynamical system and to couple it to a large eddy simulation.

  6. Diagnosing perceptual distortion present in group stereoscopic viewing

    Science.gov (United States)

    Burton, Melissa; Pollock, Brice; Kelly, Jonathan W.; Gilbert, Stephen; Winer, Eliot; de la Cruz, Julio

    2012-03-01

    Stereoscopic displays are an increasingly prevalent tool for experiencing virtual environments, and the inclusion of stereo has the potential to improve distance perception within the virtual environment. When multiple users simultaneously view the same stereoscopic display, only one user experiences the projectively correct view of the virtual environment, and all other users view the same stereoscopic images while standing at locations displaced from the center of projection (CoP). This study was designed to evaluate the perceptual distortions caused by displacement from the CoP when viewing virtual objects in the context of a virtual scene containing stereo depth cues. Judgments of angles were distorted after leftward and rightward displacement from the CoP. Judgments of object depth were distorted after forward and backward displacement from the CoP. However, perceptual distortions of angle and depth were smaller than predicted by a ray-intersection model based on stereo viewing geometry. Furthermore, perceptual distortions were asymmetric, leading to different patterns of distortion depending on the direction of displacement. This asymmetry also conflicts with the predictions of the ray-intersection model. The presence of monocular depth cues might account for departures from model predictions.
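
    A ray-intersection model of the kind used as a baseline above can be sketched in two dimensions: each eye casts a ray through its on-screen half-image point, and the predicted percept is where the two rays intersect. A minimal sketch (top-down x-z view, screen at z = 0; all coordinates illustrative, not from the study):

```python
# Ray-intersection prediction for stereo viewing geometry (2D, top-down):
# each eye casts a ray through its on-screen image point; the predicted
# perceived location is the intersection of the two rays.

def intersect(eye_l, screen_l, eye_r, screen_r):
    """Intersect rays eye_l->screen_l and eye_r->screen_r in the x-z plane."""
    (x1, z1), (x2, z2) = eye_l, screen_l
    (x3, z3), (x4, z4) = eye_r, screen_r
    dx1, dz1 = x2 - x1, z2 - z1
    dx2, dz2 = x4 - x3, z4 - z3
    det = dx1 * dz2 - dz1 * dx2
    if abs(det) < 1e-12:
        return None  # rays (nearly) parallel: no predicted percept
    t = ((x3 - x1) * dz2 - (z3 - z1) * dx2) / det
    return (x1 + t * dx1, z1 + t * dz1)
```

    Evaluating this prediction with the eye positions displaced from the CoP, while the on-screen points stay fixed, gives the geometric distortion that such a model predicts and that the measured judgments fell short of.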

  7. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem for the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation in inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which poses challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  8. Surface area and volume measurements of volcanic ash particles by SEM stereoscopic imaging

    Science.gov (United States)

    Ersoy, Orkun

    2010-05-01

    Surface area of volcanic ash particles is of great importance to research including plume dynamics, particle chemical and water reactions in the plume, modelling (i.e. plume shape, particle interactions, dispersion, etc.), remote sensing of transport and SO2, HCl, H2O, CO2 levels, forecasting plume location, and transportation and deposition of ash particles. The method presented in this study offers new insights for surface characterization of volcanic ash particles in macro-pore regions. Surface areas and volumes of volcanic ash particles were measured using digital elevation models (DEM) reconstructed from stereoscopic images acquired from different angles by scanning electron microscope (SEM). The method was tested using glycidyl methacrylate (GMA) micro-spheres, which exhibit low spherical imperfections. The differences between measured and geometrically calculated surface areas are reported for both micro-spheres and volcanic ash particles in order to highlight probable errors in modelling of volcanic ash behaviour. The specific surface areas of volcanic ash particles measured with this method are reduced by half (from mean values of 0.045 m2/g to 0.021 m2/g) over the size increment 63 μm to 125 μm. Ash particles mostly have higher specific surface area values than the geometric forms irrespective of particle size. The specific surface area trends of spheres and ash particles resemble each other for finer particles (63 μm). Approximations to a sphere and an ellipsoid have similar margins of error for coarser particles (125 μm), but both seem inadequate for representing real ash surfaces.
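
    The surface-area and volume measurement from a DEM can be sketched by splitting each grid cell into two 3D triangles and summing triangle areas and prism volumes; uniform pixel spacing and a z = 0 base plane are simplifying assumptions of the sketch, not details from the paper:

```python
# Sketch: surface area and volume above the base plane from a DEM height
# grid z[i][j] with uniform pixel spacing dx. Each cell is split into two
# 3D triangles for area; volume uses a prism (mean-height) approximation.

def tri_area(p, q, r):
    """Area of the 3D triangle with vertices p, q, r (cross-product norm)."""
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def dem_surface_area_and_volume(z, dx=1.0):
    area = vol = 0.0
    for i in range(len(z) - 1):
        for j in range(len(z[0]) - 1):
            a = (i * dx, j * dx, z[i][j])
            b = ((i + 1) * dx, j * dx, z[i + 1][j])
            c = (i * dx, (j + 1) * dx, z[i][j + 1])
            d = ((i + 1) * dx, (j + 1) * dx, z[i + 1][j + 1])
            area += tri_area(a, b, c) + tri_area(b, d, c)
            # volume above z = 0, prism approximation per cell
            vol += dx * dx * (z[i][j] + z[i + 1][j]
                              + z[i][j + 1] + z[i + 1][j + 1]) / 4.0
    return area, vol
```

    For a flat grid the triangulated area reduces to the planimetric area, which is a useful sanity check before applying the same sums to a reconstructed particle surface.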

  9. Stereoscopic particle image velocimetry analysis of healthy and emphysemic alveolar sac models.

    Science.gov (United States)

    Berg, Emily J; Robinson, Risa J

    2011-06-01

    Emphysema is a progressive lung disease that involves permanent destruction of the alveolar walls. Fluid mechanics in the pulmonary region and how they are altered with the presence of emphysema are not well understood. Much of our understanding of the flow fields occurring in the healthy pulmonary region is based on idealized geometries, and little attention has been paid to emphysemic geometries. The goal of this research was to utilize actual replica lung geometries to gain a better understanding of the mechanisms that govern fluid motion and particle transport in the most distal regions of the lung and to compare the differences that exist between healthy and emphysematous lungs. Excised human healthy and emphysemic lungs were cast, scanned, graphically reconstructed, and used to fabricate clear, hollow, compliant models. Three dimensional flow fields were obtained experimentally using stereoscopic particle image velocimetry techniques for healthy and emphysematic breathing conditions. Measured alveolar velocities ranged over two orders of magnitude from the duct entrance to the wall in both models. Recirculating flow was not found in either the healthy or the emphysematic model, while the average flow rate was three times larger in emphysema as compared to healthy. Diffusion dominated particle flow, which is characteristic in the pulmonary region of the healthy lung, was not seen for emphysema, except for very small particle sizes. Flow speeds dissipated quickly in the healthy lung (60% reduction in 0.25 mm) but not in the emphysematic lung (only 8% reduction in 0.25 mm). Alveolar ventilation per unit volume was 30% smaller in emphysema compared to healthy. Destruction of the alveolar walls in emphysema leads to significant differences in flow fields between the healthy and emphysemic lung. Models based on replica geometry provide a useful means to quantify these differences and could ultimately improve our understanding of disease progression.

  10. Monocular accommodation condition in 3D display types through geometrical optics

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Park, Min-Chul; Son, Jung-Young

    2007-09-01

    Eye fatigue or strain in 3D display environments is a significant problem for 3D display commercialization. 3D display systems such as eyeglasses-type stereoscopic, auto-stereoscopic multiview, Super Multi-View (SMV), and Multi-Focus (MF) displays are analysed in detail with respect to the satisfaction of monocular accommodation by means of geometrical optics. A lens with fixed focal length is used for experimental verification of the numerical calculation of the monocular defocus effect caused by accommodation at three different depths. The simulation and experimental results consistently show a relatively high level of satisfaction of monocular accommodation under the MF display condition. Additionally, the possibility of monocular depth perception (3D effect) with a monocular MF display is discussed.
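
    The kind of geometrical-optics defocus calculation described above can be sketched with a thin-lens, small-angle approximation; the pupil diameter and eye focal length below are illustrative defaults, not values from the paper:

```python
# Small-angle geometrical-optics sketch of monocular defocus: an eye
# accommodated at d_focus views a point at d_obj (distances in metres).
# Blur-circle diameter ~ pupil diameter x defocus (diopters) x focal length.

def defocus_diopters(d_focus, d_obj):
    """Accommodative error in diopters between focus and object distances."""
    return abs(1.0 / d_focus - 1.0 / d_obj)

def blur_circle_mm(d_focus, d_obj, pupil_mm=4.0, eye_focal_mm=17.0):
    """Approximate retinal blur-circle diameter in mm (illustrative optics)."""
    return pupil_mm * (eye_focal_mm / 1000.0) * defocus_diopters(d_focus, d_obj)
```

    In a comparison of display types, the stimulus distance implied by each rendered view would be substituted for `d_obj`, and smaller blur at the accommodated depth would indicate better satisfaction of monocular accommodation.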

  11. The perceived visual direction of monocular objects in random-dot stereograms is influenced by perceived depth and allelotropia.

    Science.gov (United States)

    Hariharan-Vilupuru, Srividhya; Bedell, Harold E

    2009-01-01

    The proposed influence of objects that are visible to both eyes on the perceived direction of an object that is seen by only one eye is known as the "capture of binocular visual direction". The purpose of this study was to evaluate whether stereoscopic depth perception is necessary for the "capture of binocular visual direction" to occur. In one pair of experiments, perceived alignment between two nearby monocular lines changed systematically with the magnitude and direction of horizontal but not vertical disparity. In four of the five observers, the effect of horizontal disparity on perceived alignment depended on which eye viewed the monocular lines. In additional experiments, the perceived alignment between the monocular lines changed systematically with the magnitude and direction of both horizontal and vertical disparities when the monocular line separation was increased from 1.1 degrees to 3.3 degrees. These results indicate that binocular capture depends on the perceived depth that results from horizontal retinal image disparity as well as allelotropia, or the averaging of local-sign information. Our data suggest that, during averaging, different weights are afforded to the local-sign information in the two eyes, depending on whether the separation between binocularly viewed targets is horizontal or vertical.

  12. Stereoscopic Configurations To Minimize Distortions

    Science.gov (United States)

    Diner, Daniel B.

    1991-01-01

    Proposed television system provides two stereoscopic displays. Two-camera, two-monitor system used in various camera configurations and with stereoscopic images on monitors magnified to various degrees. Designed to satisfy observer's need to perceive spatial relationships accurately throughout workspace or to perceive them at high resolution in small region of workspace. Potential applications include industrial, medical, and entertainment imaging and monitoring and control of telemanipulators, telerobots, and remotely piloted vehicles.

  13. M pathway and areas 44 and 45 are involved in stereoscopic recognition based on binocular disparity.

    Science.gov (United States)

    Negawa, Tsuneo; Mizuno, Shinji; Hahashi, Tomoya; Kuwata, Hiromi; Tomida, Mihoko; Hoshi, Hiroaki; Era, Seiichi; Kuwata, Kazuo

    2002-04-01

    We characterized the visual pathways involved in the stereoscopic recognition of random dot stereograms based on binocular disparity, employing functional magnetic resonance imaging (fMRI). V2, V3, V4, V5, the intraparietal sulcus (IPS) and the superior temporal sulcus (STS) were significantly activated during binocular stereopsis, but the inferotemporal gyrus (ITG) was not. Thus the human M pathway may be part of a network involved in stereoscopic processing based on binocular disparity. It is intriguing that areas 44 (Broca's area) and 45 in the left hemisphere were also active during binocular stereopsis, whereas these regions were reported to be inactive during monocular stereopsis. To separate the specific responses directly caused by the stereoscopic recognition process from nonspecific ones caused by memory load or intention, we designed a novel frequency-labeled task (FLT) sequence. The fMRI results using the FLT indicated that the activation of areas 44 and 45 is correlated with stereoscopic recognition based on binocular disparity but not with intention artifacts, suggesting that areas 44 and 45 play an essential role in processing binocular disparity.

  14. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
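
    The horizontal-image-translation step described above can be sketched as re-zeroing the disparity map at the gaze position, so the fixated object lands on the screen plane; the function name and indexing convention are ours:

```python
# Sketch of gaze-contingent horizontal image translation: subtract the
# disparity at the fixated point from the whole disparity map, so the
# object-of-interest ends up with zero disparity (on the screen plane).

def retarget_disparity(disparity_map, gaze_row, gaze_col):
    d0 = disparity_map[gaze_row][gaze_col]  # disparity of object-of-interest
    return [[d - d0 for d in row] for row in disparity_map]
```

    In practice the subtraction corresponds to shifting one view horizontally by `d0` pixels; all other scene points then carry disparities relative to the screen plane, which is what reduces the vergence-accommodation conflict at fixation.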

  15. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  16. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  17. The Research of Stereoscopic Imaging Technique Based on Single View

    Institute of Scientific and Technical Information of China (English)

    苏云; 马永利; 阮宁娟

    2011-01-01

    Stereoscopic imaging is an effective method to obtain three-dimensional profile information of a target. Currently, there are two stereoscopic imaging methods in the field of aerospace: three-line-array imaging based on the principle of stereoscopic vision, and active laser-ranging imaging. Both methods have drawbacks, either requiring a demanding platform or being unable to obtain grayscale images of the target. This paper studies a new single-view stereoscopic imaging technique. The study shows that this technique can obtain the three-dimensional profile information of a target from a single exposure with a single camera.

  18. Stereoscopic Three-Dimensional Images of an Anatomical Dissection of the Eyeball and Orbit for Educational Purposes

    Directory of Open Access Journals (Sweden)

    Matsuo, Toshihiko

    2013-04-01

    The purpose of this study was to develop a series of stereoscopic anatomical images of the eye and orbit for use in the curricula of medical schools and residency programs in ophthalmology and other specialties. Layer-by-layer dissection of the eyelid, eyeball, and orbit of a cadaver was performed by an ophthalmologist. A stereoscopic camera system was used to capture a series of anatomical views that were scanned in a panoramic three-dimensional manner around the center of the lid fissure. The images could be rotated 360 degrees in the frontal plane and the angle of views could be tilted up to 90 degrees along the anteroposterior axis perpendicular to the frontal plane around the 360 degrees. The skin, orbicularis oculi muscle, and upper and lower tarsus were sequentially observed. The upper and lower eyelids were removed to expose the bulbar conjunctiva and to insert three 25-gauge trocars for vitrectomy at the location of the pars plana. The cornea was cut at the limbus, and the lens with mature cataract was dislocated. The sclera was cut to observe the trocars from inside the eyeball. The sclera was further cut to visualize the superior oblique muscle with the trochlea and the inferior oblique muscle. The eyeball was dissected completely to observe the optic nerve and the ophthalmic artery. The thin bones of the medial and inferior orbital wall were cracked with a forceps to expose the ethmoid and maxillary sinus, respectively. In conclusion, the serial dissection images visualized aspects of the local anatomy specific to various procedures, including the levator muscle and tarsus for blepharoptosis surgery, 25-gauge trocars as viewed from inside the eye globe for vitrectomy, the oblique muscles for strabismus surgery, and the thin medial and inferior orbital bony walls for orbital bone fractures.

  19. Lesion detectability in stereoscopically viewed digital breast tomosynthesis projection images: a model observer study with anthropomorphic computational breast phantoms

    Science.gov (United States)

    Reinhold, Jacob; Wen, Gezheng; Lo, Joseph Y.; Markey, Mia K.

    2017-03-01

    Stereoscopic views of 3D breast imaging data may better reveal the 3D structures of breasts, and potentially improve the detection of breast lesions. The imaging geometry of digital breast tomosynthesis (DBT) lends itself naturally to stereo viewing because a stereo pair can be easily formed by two projection images with a reasonable separation angle for perceiving depth. This simulation study attempts to mimic breast lesion detection on stereo viewing of a sequence of stereo pairs of DBT projection images. 3D anthropomorphic computational breast phantoms were scanned by a simulated DBT system, and spherical signals were inserted into different breast regions to imitate the presence of breast lesions. The regions of interest (ROIs) had different local anatomical structures and consequently different background statistics. The projection images were combined into a sequence of stereo pairs, and then presented to a stereo matching model observer for determining lesion presence. The signal-to-noise ratio (SNR) was used as the figure of merit in evaluation, and the SNR from the stack of reconstructed slices served as the benchmark. We have shown that: 1) incorporating local anatomical backgrounds may improve lesion detectability relative to ignoring location-dependent image characteristics. The SNR was lower for the ROIs with the higher local power-law-noise coefficient β. 2) Lesion detectability may be inferior on stereo viewing of projection images relative to conventional viewing of reconstructed slices, but further studies are needed to confirm this observation.
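    The SNR figure of merit for a linear model observer can be illustrated with a small numerical sketch. This is a generic Hotelling-observer computation on synthetic Gaussian ROIs, not the authors' stereo-matching observer; the sample counts, ROI size, and signal amplitude are made up for illustration.

    ```python
    import numpy as np

    def hotelling_snr(absent, present):
        """Detectability SNR of a linear Hotelling observer.

        absent, present: (n_samples, n_pixels) arrays of vectorized ROIs
        without / with the inserted signal.
        """
        delta = present.mean(axis=0) - absent.mean(axis=0)    # mean signal
        cov = 0.5 * (np.cov(absent, rowvar=False) +
                     np.cov(present, rowvar=False))           # pooled covariance
        cov += 1e-6 * np.eye(cov.shape[0])                    # mild regularization
        w = np.linalg.solve(cov, delta)                       # observer template
        return float(np.sqrt(delta @ w))                      # SNR^2 = d' Σ^-1 d

    rng = np.random.default_rng(0)
    n, d = 500, 16                        # 500 toy ROIs of 16 pixels each
    background = rng.normal(0.0, 1.0, (n, d))
    signal = 0.5 * np.ones(d)             # flat "lesion" added to every pixel
    snr = hotelling_snr(background, background + signal)
    ```

    A stronger inserted signal, or a weaker anatomical background, raises this SNR; that is the sense in which the paper compares detectability across ROIs and viewing modes.
    
    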

  20. Geometric and Reflectance Signature Characterization of Complex Canopies Using Hyperspectral Stereoscopic Images from Uav and Terrestrial Platforms

    Science.gov (United States)

    Honkavaara, E.; Hakala, T.; Nevalainen, O.; Viljanen, N.; Rosnell, T.; Khoramshahi, E.; Näsi, R.; Oliveira, R.; Tommaselli, A.

    2016-06-01

    Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physically based retrieval of biophysical and biochemical parameters by model inversion technologies. The objective of this investigation was to study aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV) and terrestrial platforms with novel hyperspectral frame cameras in a complex, forested environment.

  1. GEOMETRIC AND REFLECTANCE SIGNATURE CHARACTERIZATION OF COMPLEX CANOPIES USING HYPERSPECTRAL STEREOSCOPIC IMAGES FROM UAV AND TERRESTRIAL PLATFORMS

    Directory of Open Access Journals (Sweden)

    E. Honkavaara

    2016-06-01

    Full Text Available Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physically based retrieval of biophysical and biochemical parameters by model inversion technologies. The objective of this investigation was to study aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV) and terrestrial platforms with novel hyperspectral frame cameras in a complex, forested environment.

  2. Robust 3D Object Tracking from Monocular Images using Stable Parts.

    Science.gov (United States)

    Crivellaro, Alberto; Rad, Mahdi; Verdie, Yannick; Yi, Kwang Moo; Fua, Pascal; Lepetit, Vincent

    2017-05-26

    We present an algorithm for estimating the pose of a rigid object in real-time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: We predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are three-fold: We can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state-of-the-art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.
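    The representation above pools predicted 2D projections of known 3D control points from all visible parts into one pose problem. A textbook way to solve such pooled 3D-to-2D correspondences is the Direct Linear Transform (DLT), sketched below; this is a generic illustration of that step, not the authors' actual solver, and the camera intrinsics and point values are invented.

    ```python
    import numpy as np

    def dlt_projection_matrix(points3d, points2d):
        """Estimate a 3x4 projection matrix P (up to scale) from >= 6
        noiseless 3D-2D correspondences via the Direct Linear Transform."""
        rows = []
        for (X, Y, Z), (u, v) in zip(points3d, points2d):
            Xh = np.array([X, Y, Z, 1.0])
            rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
            rows.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
        # The null-space vector of the stacked system gives P up to scale
        _, _, vt = np.linalg.svd(np.asarray(rows))
        return vt[-1].reshape(3, 4)

    def project(P, points3d):
        """Pinhole projection with perspective divide."""
        Xh = np.c_[points3d, np.ones(len(points3d))]
        x = Xh @ P.T
        return x[:, :2] / x[:, 2:3]

    # Invented camera: focal 500 px, principal point (320, 240), 5 units away
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    P_true = K @ np.c_[np.eye(3), [0.0, 0.0, 5.0]]
    rng = np.random.default_rng(1)
    pts3d = rng.uniform(-1, 1, (10, 3))       # pooled control points (all parts)
    pts2d = project(P_true, pts3d)
    P_est = dlt_projection_matrix(pts3d, pts2d)
    err = np.abs(project(P_est, pts3d) - pts2d).max()   # reprojection error
    ```

    With noiseless correspondences the recovered matrix reprojects the control points essentially exactly; with noisy per-part predictions, pooling more points (as the paper's combination step does) stabilizes the estimate.
    
    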

  3. Subjective Perception Assessment of Stereoscopic Image Quality: A Study

    Institute of Scientific and Technical Information of China (English)

    朱江英; 郁梅; 陈芬; 李福翠

    2015-01-01

    Stereoscopic image quality assessment (IQA) includes subjective and objective evaluation methods; this paper focuses on subjective stereoscopic IQA. Several mainstream subjective evaluation methods, designed around the psychophysical characteristics of the human visual system, are discussed. The influence of various kinds of distortion on the perceived quality of stereoscopic images is also analyzed. Finally, the outlook for subjective stereoscopic IQA is envisioned.

  4. Amodal completion with background determines depth from monocular gap stereopsis.

    Science.gov (United States)

    Grove, Philip M; Ben Sachtler, W L; Gillam, Barbara J

    2006-10-01

    Grove, Gillam, and Ono [Grove, P. M., Gillam, B. J., & Ono, H. (2002). Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms. Vision Research, 42, 1859-1870] reported that perceived depth in monocular gap stereograms [Gillam, B. J., Blackburn, S., & Nakayama, K. (1999). Stereopsis based on monocular gaps: Metrical encoding of depth and slant without matching contours. Vision Research, 39, 493-502] was attenuated when the color/texture in the monocular gap did not match the background. It appears that continuation of the gap with the background constitutes an important component of the stimulus conditions that allow a monocular gap in an otherwise binocular surface to be responded to as a depth step. In this report we tested this view using the conventional monocular gap stimulus of two identical grey rectangles separated by a gap in one eye but abutting to form a solid grey rectangle in the other. We compared depth seen at the gap for this stimulus with stimuli that were identical except for two additional small black squares placed at the ends of the gap. If the squares were placed stereoscopically behind the rectangle/gap configuration (appearing on the background) they interfered with the perceived depth at the gap. However when they were placed in front of the configuration this attenuation disappeared. The gap and the background were able under these conditions to complete amodally.

  5. Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging

    Science.gov (United States)

    Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves

    2014-01-01

    Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general. PMID:24904284

  6. Monocular visual ranging

    Science.gov (United States)

    Witus, Gary; Hunt, Shawn

    2008-04-01

    The vision system of a mobile robot for checkpoint and perimeter security inspection performs multiple functions: providing surveillance video, providing high resolution still images, and providing video for semi-autonomous visual navigation. Mid-priced commercial digital cameras support the primary inspection functions. Semi-autonomous visual navigation is a tertiary function whose purpose is to reduce the burden of teleoperation and free the security personnel for their primary functions. Approaches to robot visual navigation require some form of depth perception for speed control to prevent the robot from colliding with objects. In this paper we present the initial results of an exploration of the capabilities and limitations of using a single monocular commercial digital camera for depth perception. Our approach combines complementary methods in alternating stationary and moving behaviors. When the platform is stationary, it computes a range image from differential blur in the image stack collected at multiple focus settings. When the robot is moving, it extracts an estimate of range from the camera auto-focus function, and combines this with an estimate derived from angular expansion of a constellation of visual tracking points.
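    The stationary-mode idea (range from differential blur across a focus stack) can be sketched as a depth-from-focus computation: score each focus setting with a per-pixel sharpness measure and pick, per pixel, the setting that maximizes it. This is a generic illustration on made-up data, not the authors' algorithm; a real system would map focus settings to distances via the lens calibration.

    ```python
    import numpy as np

    def laplacian_energy(img):
        """Per-pixel focus measure: squared response of a 5-point Laplacian
        (periodic boundaries via np.roll, adequate for a sketch)."""
        lap = (-4.0 * img
               + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
        return lap ** 2

    def depth_from_focus(stack, focus_distances):
        """stack: (n_settings, H, W) images taken at the given focus distances.
        Returns a per-pixel range estimate: the distance whose slice is sharpest."""
        sharpness = np.stack([laplacian_energy(s) for s in stack])
        return np.asarray(focus_distances)[np.argmax(sharpness, axis=0)]

    # Toy stack: the scene is sharp only in the middle slice (distance 2.0)
    h = w = 8
    checker = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(float)
    stack = np.stack([np.full((h, w), 0.5), checker, np.full((h, w), 0.5)])
    depth = depth_from_focus(stack, focus_distances=[1.0, 2.0, 3.0])
    ```
    
    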

  7. Monocular occlusions determine the perceived shape and depth of occluding surfaces.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2010-06-01

    Recent experiments have established that monocular areas arising due to occlusion of one object by another contribute to stereoscopic depth perception. It has been suggested that the primary role of monocular occlusions is to define depth discontinuities and object boundaries in depth. Here we use a carefully designed stimulus to demonstrate empirically that monocular occlusions play an important role in localizing depth edges and defining the shape of the occluding surfaces in depth. We show that the depth perceived via occlusion in our stimuli is not due to the presence of binocular disparity at the boundary and discuss the quantitative nature of depth perception in our stimuli. Our data suggest that the visual system can use monocular information to estimate not only the sign of the depth of the occluding surface but also its magnitude. We also provide preliminary evidence that perceived depth of illusory occluders derived from monocular information can be biased by binocular features.

  8. Stereoscopic advantages for vection induced by radial, circular, and spiral optic flows.

    Science.gov (United States)

    Palmisano, Stephen; Summersby, Stephanie; Davies, Rodney G; Kim, Juno

    2016-11-01

    Although observer motions project different patterns of optic flow to our left and right eyes, there has been surprisingly little research into potential stereoscopic contributions to self-motion perception. This study investigated whether visually induced illusory self-motion (i.e., vection) is influenced by the addition of consistent stereoscopic information to radial, circular, and spiral (i.e., combined radial + circular) patterns of optic flow. Stereoscopic vection advantages were found for radial and spiral (but not circular) flows when monocular motion signals were strong. Under these conditions, stereoscopic benefits were greater for spiral flow than for radial flow. These effects can be explained by differences in the motion aftereffects generated by these displays, which suggest that the circular motion component in spiral flow selectively reduced adaptation to stereoscopic motion-in-depth. Stereoscopic vection advantages were not observed for circular flow when monocular motion signals were strong, but emerged when monocular motion signals were weakened. These findings show that stereoscopic information can contribute to visual self-motion perception in multiple ways.

  9. Binocular coordination: reading stereoscopic sentences in depth.

    Directory of Open Access Journals (Sweden)

    Elizabeth R Schotter

    Full Text Available The present study employs a stereoscopic manipulation to present sentences in three dimensions to subjects as they read for comprehension. Subjects read sentences with (a) no depth cues, (b) a monocular depth cue that implied the sentence loomed out of the screen (i.e., increasing retinal size), (c) congruent monocular and binocular (retinal disparity) depth cues (i.e., both implied the sentence loomed out of the screen) and (d) incongruent monocular and binocular depth cues (i.e., the monocular cue implied the sentence loomed out of the screen and the binocular cue implied it receded behind the screen). Reading efficiency was mostly unaffected, suggesting that reading in three dimensions is similar to reading in two dimensions. Importantly, fixation disparity was driven by retinal disparity; fixations were significantly more crossed as readers progressed through the sentence in the congruent condition and significantly more uncrossed in the incongruent condition. We conclude that disparity depth cues are used on-line to drive binocular coordination during reading.

  10. Binocular coordination: reading stereoscopic sentences in depth.

    Science.gov (United States)

    Schotter, Elizabeth R; Blythe, Hazel I; Kirkby, Julie A; Rayner, Keith; Holliman, Nicolas S; Liversedge, Simon P

    2012-01-01

    The present study employs a stereoscopic manipulation to present sentences in three dimensions to subjects as they read for comprehension. Subjects read sentences with (a) no depth cues, (b) a monocular depth cue that implied the sentence loomed out of the screen (i.e., increasing retinal size), (c) congruent monocular and binocular (retinal disparity) depth cues (i.e., both implied the sentence loomed out of the screen) and (d) incongruent monocular and binocular depth cues (i.e., the monocular cue implied the sentence loomed out of the screen and the binocular cue implied it receded behind the screen). Reading efficiency was mostly unaffected, suggesting that reading in three dimensions is similar to reading in two dimensions. Importantly, fixation disparity was driven by retinal disparity; fixations were significantly more crossed as readers progressed through the sentence in the congruent condition and significantly more uncrossed in the incongruent condition. We conclude that disparity depth cues are used on-line to drive binocular coordination during reading.

  11. Stereoscopic contents authoring system for 3D DMB data service

    Science.gov (United States)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for the 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides the convenient and straightforward contents creation and editing functionalities. For the creation of stereoscopic contents, we mainly focused on two methods: CG (Computer Graphics) based creation and real image based creation. In the CG based creation scenario where the generated CG data from the conventional MAYA or 3DS MAX tool is rendered to generate the stereoscopic images by applying the suitable disparity and camera parameters, we use X-file for the direct conversion to stereoscopic objects, so called 3D DMB objects. In the case of real image based creation, the chroma-key method is applied to real video sequences to acquire the alpha-mapped images which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes the timeline editor for both the stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented the content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic contents creation for mobile 3D data service especially targeted for T-DMB with the capabilities of CG and real image based contents creation, timeline editing and content verification.
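    The disparity-adjustment step for stereoscopic objects can be sketched as naive disparity-based view synthesis: shift each pixel horizontally by half its disparity in opposite directions for the two eyes. This is a simplified illustration (integer shifts, occlusion holes left unfilled), not the authoring tool's actual renderer; all data here is invented.

    ```python
    import numpy as np

    def shift_rows(img, shift):
        """Shift each pixel horizontally by the per-pixel integer amount in
        `shift`. Vacated positions stay 0 (disocclusion holes)."""
        h, w = img.shape
        out = np.zeros_like(img)
        cols = np.arange(w)
        for y in range(h):
            target = cols + shift[y]
            ok = (target >= 0) & (target < w)
            out[y, target[ok]] = img[y, cols[ok]]
        return out

    def stereo_pair(img, disparity):
        """Left/right views from one image and an integer disparity map.
        Positive disparity makes the object appear in front of the screen."""
        half = disparity // 2
        return shift_rows(img, half), shift_rows(img, -half)

    img = np.arange(16.0).reshape(2, 8)      # toy 2x8 "object" image
    disp = np.full((2, 8), 4, dtype=int)     # uniform crossed disparity
    left, right = stereo_pair(img, disp)
    ```

    Adjusting the disparity map and re-synthesizing the pair corresponds to the content-verification step described above, where the perceived depth of an object is tuned before broadcast.
    
    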

  12. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Ragni, D.; Oudheusden, B.W. van; Scarano, F. [Delft University of Technology, Faculty of Aerospace Engineering, Delft (Netherlands)

    2012-02-15

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes perpendicular to the blade axis and merged to form a 3D measurement volume. Transonic conditions have been reached at the tip region, with a revolution frequency of 19,800 rpm and a relative free-stream Mach number of 0.73 at the tip. The pressure field and the surface pressure distribution are inferred from the 3D velocity data through integration of the momentum Navier-Stokes equation in differential form, allowing for the simultaneous flow visualization and the aerodynamic loads computation, with respect to a reference frame moving with the blade. The momentum and pressure data are further integrated by means of a contour-approach to yield the aerodynamic sectional force components as well as the blade torsional moment. A steady Reynolds averaged Navier-Stokes numerical simulation of the entire propeller model has been used for comparison to the measurement data. (orig.)
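    The pressure-from-velocity step rests on evaluating the momentum equation on the measured velocity volume; for a steady incompressible 2D slice it reduces to grad p = -rho (u.grad)u + mu laplacian(u). Below is a minimal finite-difference sketch of that evaluation, verified on an invented inviscid check case (solid-body rotation, where the pressure gradient is the centripetal term rho Omega^2 r); it is not the paper's full 3D integration scheme.

    ```python
    import numpy as np

    def pressure_gradient(u, v, dx, dy, rho=1.0, mu=0.0):
        """Pressure gradient of a steady incompressible 2D flow from the
        momentum equation: grad p = -rho (u.grad)u + mu laplacian(u).
        u, v: (ny, nx) velocity components on a regular grid."""
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        lap_u = (np.gradient(np.gradient(u, dx, axis=1), dx, axis=1)
                 + np.gradient(np.gradient(u, dy, axis=0), dy, axis=0))
        lap_v = (np.gradient(np.gradient(v, dx, axis=1), dx, axis=1)
                 + np.gradient(np.gradient(v, dy, axis=0), dy, axis=0))
        dp_dx = -rho * (u * du_dx + v * du_dy) + mu * lap_u
        dp_dy = -rho * (u * dv_dx + v * dv_dy) + mu * lap_v
        return dp_dx, dp_dy

    # Check case: solid-body rotation u = -Omega*y, v = Omega*x
    omega, n = 2.0, 21
    x = np.linspace(-1, 1, n)
    y = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, y)                       # shape (ny, nx)
    dp_dx, dp_dy = pressure_gradient(-omega * Y, omega * X,
                                     x[1] - x[0], y[1] - y[0])
    ```

    In the study, the resulting gradient field is spatially integrated to recover the pressure itself and then the surface loads; that integration needs a reference pressure boundary condition, which the sketch omits.
    
    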

  13. Experimental insights into flow impingement in cerebral aneurysm by stereoscopic particle image velocimetry: transition from a laminar regime.

    Science.gov (United States)

    Yagi, Takanobu; Sato, Ayaka; Shinke, Manabu; Takahashi, Sara; Tobe, Yasutaka; Takao, Hiroyuki; Murayama, Yuichi; Umezu, Mitsuo

    2013-05-01

    This study experimentally investigated the instability of flow impingement in a cerebral aneurysm, which was speculated to promote the degradation of the aneurysmal wall. A patient-specific, full-scale, elastic-walled replica of a cerebral artery was fabricated from transparent silicone rubber. The geometry of the aneurysm corresponded to that found at 9 days before rupture. The flow in the replica was analysed by quantitative flow visualization (stereoscopic particle image velocimetry) in a three-dimensional, high-resolution and time-resolved manner. The mid-systolic and late-diastolic flows, with Reynolds numbers of 450 and 230 respectively, were compared. The temporal and spatial variations of near-wall velocity at flow impingement delineated its inherent instability at a low Reynolds number. Wall shear stress (WSS) at that site exhibited a combination of temporal fluctuation and spatial divergence. The frequency range of fluctuation was found to exceed significantly that of the heart rate. The high-frequency-fluctuating WSS appeared only during mid-systole and disappeared during late diastole. These results suggested that the flow impingement induced a transition from a laminar regime. This study demonstrated that the hydrodynamic instability of the shear layer could not be neglected even at a low Reynolds number. No assumption was found to justify treating the aneurysmal haemodynamics as a fully viscous laminar flow.

  14. Stereoscopic particle image velocimetry measurements of the three-dimensional flow field of a descending autorotating mahogany seed (Swietenia macrophylla).

    Science.gov (United States)

    Salcedo, E; Treviño, C; Vargas, R O; Martínez-Suástegui, L

    2013-06-01

    An experimental investigation of near field aerodynamics of wind dispersed rotary seeds has been performed using stereoscopic digital particle image velocimetry (DPIV). The detailed three-dimensional flow structure of the leading-edge vortex (LEV) of autorotating mahogany seeds (Swietenia macrophylla) in a low-speed vertical wind tunnel is revealed for the first time. The results confirm that the presence of strong spanwise flow and strain produced by centrifugal forces through a spiral vortex are responsible for the attachment and stability of the LEV, with its core forming a cone pattern with a gradual increase in vortex size. The LEV appears at 25% of the wingspan, increases in size and strength outboard along the wing, and reaches its maximum stability and spanwise velocity at 75% of the wingspan. At a region between 90 and 100% of the wingspan, the strength and stability of the vortex core decreases and the LEV re-orientation/inflection with the tip vortex takes place. In this study, the instantaneous flow structure and the instantaneous velocity and vorticity fields measured in planes parallel to the free stream direction are presented as contour plots using an inertial and a non-inertial frame of reference. Results for the mean aerodynamic thrust coefficients as a function of the Reynolds number are presented to supplement the DPIV data.
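    The vorticity fields reported here are derived from the measured velocity planes; the standard PIV post-processing step is a finite-difference curl. The sketch below evaluates the out-of-plane component on an invented solid-body-rotation field (uniform vorticity 2*Omega), not the study's data.

    ```python
    import numpy as np

    def vorticity_z(u, v, dx, dy):
        """Out-of-plane vorticity omega_z = dv/dx - du/dy from an in-plane
        velocity field sampled on a regular (ny, nx) grid."""
        dv_dx = np.gradient(v, dx, axis=1)
        du_dy = np.gradient(u, dy, axis=0)
        return dv_dx - du_dy

    # Solid-body rotation at rate Omega has uniform vorticity 2*Omega
    omega = 3.0
    x = np.linspace(-1, 1, 15)
    X, Y = np.meshgrid(x, x)
    wz = vorticity_z(-omega * Y, omega * X, x[1] - x[0], x[1] - x[0])
    ```
    
    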

  15. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    Science.gov (United States)

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon. This technique is a step toward computer-aided surgery, a field that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  16. Costless Platform for High Resolution Stereoscopic Images of a High Gothic Facade

    Science.gov (United States)

    Héno, R.; Chandelier, L.; Schelstraete, D.

    2012-07-01

    In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN's School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to survey the main facade of the cathedral of Amiens, which is very complex in both size and decoration. Although it was first planned to use a lift truck for the image survey, budget considerations and a taste for experimentation pushed the project in another direction: images shot from ground level with a long-focal-length camera would be combined with complementary images shot, using a wide-angle camera fixed on a horizontal 2.5-metre pole, from the higher galleries available on the main facade. This heterogeneous image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated for specific parts of the facade from both sources of images. If the proposed device and methodology for obtaining full image coverage of the main facade prove fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds from wide-angle images of the rose window of the main facade.

  17. A METHOD FOR RECORDING AND VIEWING STEREOSCOPIC IMAGES IN COLOUR USING MULTICHROME FILTERS

    DEFF Research Database (Denmark)

    2000-01-01

    in a conventional stereogram recorded of the scene. The invention makes use of a colour-based encoding technique and viewing filters selected so that the human observer receives, in one eye, an image of nearly full colour information, in the other eye, an essentially monochrome image supplying the parallactic...

  18. A study to evaluate the reliability of using two-dimensional photographs, three-dimensional images, and stereoscopic projected three-dimensional images for patient assessment.

    Science.gov (United States)

    Zhu, S; Yang, Y; Khambay, B

    2017-03-01

    Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images, and stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression.

  19. Stereoscopic Projection of 35mm Slides.

    Science.gov (United States)

    Carey, Edward F.

    1978-01-01

    Describes ways of projecting stereoscopic images of geologic environments for students who have difficulty reasoning in three dimensions. The photographic procedures needed to produce stereo slides are included. (MA)

  20. SHAPE AND ALBEDO FROM SHADING (SAfS) FOR PIXEL-LEVEL DEM GENERATION FROM MONOCULAR IMAGES CONSTRAINED BY LOW-RESOLUTION DEM

    Directory of Open Access Journals (Sweden)

    B. Wu

    2016-06-01

    Full Text Available Lunar topographic information, e.g., lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry, of which photogrammetric methods require multiple stereo images of an area. DEMs generated from these methods are usually achieved by various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of Computer Vision, has been introduced to pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo were to be estimated simultaneously, this is a SAfS (Shape and Albedo from Shading) problem and it will be under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this is even more valid in the case of the lunar surface due to its lower surface albedo complexity than the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with known light source, and at the same time we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from the Lunar Reconnaissance Orbiter (LRO).

  1. Shape and Albedo from Shading (SAfS) for Pixel-Level dem Generation from Monocular Images Constrained by Low-Resolution dem

    Science.gov (United States)

    Wu, Bo; Chung Liu, Wai; Grumpe, Arne; Wöhler, Christian

    2016-06-01

    Lunar topographic information, e.g., the lunar DEM (Digital Elevation Model), is very important for lunar exploration missions and scientific research. Lunar DEMs are typically generated from photogrammetric image processing or laser altimetry; photogrammetric methods require multiple stereo images of an area. DEMs generated by these methods usually rely on various interpolation techniques, leading to interpolation artifacts in the resulting DEM. On the other hand, photometric shape reconstruction, e.g., SfS (Shape from Shading), extensively studied in the field of computer vision, has been introduced for pixel-level resolution DEM refinement. SfS methods have the ability to reconstruct the pixel-wise terrain details that explain a given image of the terrain. If the terrain and its corresponding pixel-wise albedo are to be estimated simultaneously, this becomes a SAfS (Shape and Albedo from Shading) problem, which is under-determined without additional information. Previous works show strong statistical regularities in the albedo of natural objects, and this is even more plausible in the case of the lunar surface due to its lower surface albedo complexity compared with the Earth. In this paper we suggest a method that refines a lower-resolution DEM to pixel-level resolution given a monocular image of the coverage with a known light source; at the same time, we also estimate the corresponding pixel-wise albedo map. We regulate the behaviour of albedo and shape such that the optimized terrain and albedo are the likely solutions that explain the corresponding image. The parameters in the approach are optimized through a kernel-based relaxation framework to gain computational advantages. In this research we experimentally employ the Lunar-Lambertian model for reflectance modelling; the framework of the algorithm is expected to be independent of a specific reflectance model. Experiments are carried out using the monocular images from Lunar Reconnaissance Orbiter (LRO
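The shading model at the heart of SfS/SAfS can be sketched with a plain Lambertian reflectance (a simplification of the Lunar-Lambertian model named in the abstract; the DEM, albedo value, and light direction below are illustrative):

```python
import numpy as np

def render_lambertian(dem, albedo, light_dir):
    """Shade a DEM under the Lambertian model I = albedo * max(0, n.s),
    with per-pixel surface normals taken from the DEM gradients."""
    s = np.asarray(light_dir, dtype=float)
    s /= np.linalg.norm(s)
    gy, gx = np.gradient(dem)              # slopes along rows (y) and cols (x)
    norm = np.sqrt(gx**2 + gy**2 + 1.0)    # length of the normal (-gx, -gy, 1)
    n_dot_s = (-gx * s[0] - gy * s[1] + s[2]) / norm
    return albedo * np.clip(n_dot_s, 0.0, None)

# A uniformly tilted plane with constant albedo shades to a constant value.
y, x = np.mgrid[0:64, 0:64].astype(float)
dem = 0.1 * x                              # constant slope in the x direction
img = render_lambertian(dem, 0.3, (0.5, 0.0, 1.0))
```

Refinement then amounts to adjusting the DEM (and the albedo map) so that the rendered image matches the observed one, subject to the constraint from the low-resolution DEM.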

  2. Study of asthenopia caused by the viewing of stereoscopic images: measurement by MEG and other devices

    Science.gov (United States)

    Hagura, Hiroyuki; Nakajima, Masayuki

    2006-02-01

    Three-dimensional (hereafter, 3D) imaging is a very powerful tool for helping people understand the spatial relationships of objects. Various glassless 3D imaging technologies for 3D TV, personal computers, PDAs, and cellular phones have been developed. These devices are often viewed for long periods, and most people who watch 3D images for a long time experience asthenopia, or eye fatigue. This is a preliminary study that attempts to find the basic cause of the problem by using MEG and other devices; plans call for further neurophysiological study on this subject. The purpose of my study is to design a standard or guidelines for shooting, image processing, and displaying 3D images so as to create suitable images with higher quality and less or no asthenopia. Although it is difficult to completely avoid asthenopia when viewing 3D images, it would be useful if guidelines for the production of such images could be established that reduced its severity. The final goal of my research is to formulate such guidelines on an objective basis derived from measurement results from MEG and other devices. In addition to this study, I was in charge of the work to install the world's largest glasses-free 3D display at the Japan Pavilion Nagakute of the 2005 World Exposition, Aichi, Japan, from March 25 to September 25, 2005. Several types of large screens for 3D movies were available for testing, and the results of those tests are added to this report.

  3. Monocular transparency generates quantitative depth.

    Science.gov (United States)

    Howard, Ian P; Duke, Philip A

    2003-11-01

    Monocular zones adjacent to depth steps can create an impression of depth in the absence of binocular disparity. However, the magnitude of depth is not specified. We designed a stereogram that provides information about depth magnitude but which has no disparity. The effect depends on transparency rather than occlusion. For most subjects, depth magnitude produced by monocular transparency was similar to that created by a disparity-defined depth probe. Addition of disparity to monocular transparency did not improve the accuracy of depth settings. The magnitude of depth created by monocular occlusion fell short of that created by monocular transparency.

  4. Crosstalk in stereoscopic displays: a review

    Science.gov (United States)

    Woods, Andrew J.

    2012-10-01

    Crosstalk, also known as ghosting or leakage, is a primary factor in determining the image quality of stereoscopic three-dimensional (3D) displays. In a stereoscopic display, a separate perspective view is presented to each of the observer's two eyes in order to experience a 3D image with depth sensation. When crosstalk is present in a stereoscopic display, each eye sees a combination of the image intended for that eye and some of the image intended for the other eye, making the image look doubled or ghosted. High levels of crosstalk can make stereoscopic images hard to fuse and lacking in fidelity, so it is important to achieve low levels of crosstalk in the development of high-quality stereoscopic displays. Descriptive and mathematical definitions of these terms are formalized and summarized. The mechanisms by which crosstalk occurs in different stereoscopic display technologies are also reviewed, including micropol 3D liquid crystal displays (LCDs), autostereoscopic (lenticular and parallax barrier) displays, polarized projection, anaglyph, and time-sequential 3D on LCDs, plasma display panels, and cathode ray tubes. Crosstalk reduction and crosstalk cancellation are also discussed, along with methods of measuring and simulating crosstalk.
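A first-order picture of crosstalk and its cancellation can be sketched as a linear mixing of the two eye images (the 3% leakage figure and the symmetric mixing matrix are illustrative assumptions, not values from the review):

```python
import numpy as np

# Hypothetical 3% crosstalk: each eye sees its own image plus a fraction c
# of the other eye's image (a common first-order model of leakage).
c = 0.03
L = np.array([0.8, 0.2, 0.5])   # intended left-eye pixel values
R = np.array([0.1, 0.7, 0.5])   # intended right-eye pixel values

# What each eye perceives with crosstalk present:
seen_L = L + c * R
seen_R = R + c * L

# Crosstalk cancellation: pre-distort the driven images by inverting the
# 2x2 mixing matrix [[1, c], [c, 1]], so the displayed mixture reproduces
# the intended views (only valid where the result stays within the gamut).
drive_L = (L - c * R) / (1 - c**2)
drive_R = (R - c * L) / (1 - c**2)
```

With the pre-distorted drive images, the same leakage now reconstructs the intended views exactly, which is the essence of crosstalk cancellation.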

  5. Roughness preserving filter design to remove spatial noise from stereoscopic skin images for stable haptic rendering.

    Science.gov (United States)

    Lee, K; Kim, M; Lee, O; Kim, K

    2017-08-01

    A problem in skin rendering with haptic feedback is the reconstruction of accurate 3D skin surfaces from stereo skin images to be used for touch interactions. This problem also encompasses the issue of how to accurately remove haptic spatial noise caused by the construction of disparity maps from stereo skin images, while minimizing the loss of the original skin roughness, so that real touch textures can be cloned without errors. Since the haptic device is very sensitive to high frequencies, even small amounts of noise can cause serious system errors, including mechanical oscillations and unexpected exerted forces. Therefore, there is a need to develop a noise removal algorithm that preserves haptic roughness. A new algorithm for a roughness preserving filter (RPF) that adaptively removes spatial noise is proposed. The algorithm uses the disparity control parameter (λ) and the noise control parameter (k), obtained from singular value decomposition of a disparity map. The parameter k determines the amount of noise to be removed, and the optimum value of k is automatically chosen based on a threshold on the gradient angles of roughness (Ra). The RPF algorithm was implemented and verified with three real skin images. Evaluation criteria include preserved roughness quality and removed noise. Mean squared error (MSE), peak signal-to-noise ratio (PSNR), and the objective roughness measures Ra and Rq were used for evaluation, and the results were compared against a median filter. The results show that the proposed RPF algorithm is a promising technology for removing noise while retaining maximal roughness, which guarantees stable haptic rendering of skin roughness. The proposed RPF is a promising technology because it allows any stereo image to be filtered without the risk of losing the original roughness. In addition, the algorithm runs automatically for any given stereo skin image with relation to the disparity parameter λ, and the roughness parameters Ra or Rq are given priority
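The SVD-based noise-control idea can be sketched by truncating the small singular values of a disparity map (a simplified stand-in for the RPF: the published method chooses k automatically from a roughness threshold on Ra, which is not reproduced here):

```python
import numpy as np

def svd_denoise(disparity, k):
    """Reconstruct a disparity map from its k largest singular values,
    discarding the small-singular-value components treated here as
    spatial noise."""
    U, s, Vt = np.linalg.svd(disparity, full_matrices=False)
    s[k:] = 0.0
    return U @ np.diag(s) @ Vt

# A smooth rank-1 'disparity surface' corrupted by additive noise.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0.0, 1.0, 64), np.linspace(1.0, 2.0, 64))
noisy = smooth + 0.01 * rng.standard_normal(smooth.shape)
denoised = svd_denoise(noisy, k=1)
```

Keeping only the leading singular values removes most of the high-frequency noise that would otherwise excite the haptic device, at the cost of flattening genuine fine roughness when k is too small, which is why the choice of k matters.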

  6. A Study on Stereoscopic X-ray Imaging Data Set on the Accuracy of Real-Time Tumor Tracking in External Beam Radiotherapy.

    Science.gov (United States)

    Esmaili Torshabi, Ahmad; Ghorbanzadeh, Leila

    2017-04-01

    In external beam radiotherapy, the stereoscopic X-ray imaging system serves as the provider of tumor motion information. This system takes X-ray images of the tumor position intermittently (1) at the pretreatment step, to provide a training data set for model construction, and (2) during treatment, to check the accuracy of the correlation model's performance. In this work, we investigated the effect of the imaging data points provided by this system on treatment quality, because some information is still lacking about (1) the number of imaging data points, (2) the shooting time for capturing each data point, and (3) the additional imaging dose delivered by this system. These three issues were comprehensively assessed (1) at the pretreatment step, while the training data set is gathered for prediction model construction, and (2) during treatment, while the model is tested and reconstructed using newly arriving data points. A group of real patients treated with the CyberKnife Synchrony module was chosen for this work, and an adaptive neuro-fuzzy inference system was considered as a consistent correlation model. Results show that a proper model can be constructed when the number of imaging data points is high enough to represent a good pattern of the breathing cycles. Moreover, a trade-off between the number of imaging data points and the additional imaging dose is considered in this study. Since breathing phenomena vary greatly between patients, the timing of some imaging data points is very important, as their absence at a critical time may yield wrong tumor tracking. In contrast, another category of imaging data points is less sensitive, while breathing is normal and within the control range. Therefore, an adaptive supervision of the stereoscopic X-ray imaging is proposed to accomplish the shooting process intelligently, based on breathing motion variations.

  7. Visual Fatigue and Mitigation for Stereoscopic Image Observation

    Institute of Scientific and Technical Information of China (English)

    王飞; 王晨升; 刘晓杰

    2011-01-01

    There are a number of problems in the observation of stereoscopic images created using parallax, of which visual fatigue is the most critical. By exploring the causes and symptoms of visual fatigue in parallax-based stereoscopic image observation, the mechanism by which inconsistent vergence and focus adjustment influence visual fatigue is discussed. On this basis, four strategies to mitigate visual fatigue in parallax-based stereoscopic image observation are proposed: reducing the burden on the eyes, limiting the inconsistency of focus adjustment and convergence to a reasonable bound, avoiding excessive parallax and its discontinuous variation, and utilizing corrective lenses.

  8. Evaluating methods for controlling depth perception in stereoscopic cinematography.

    OpenAIRE

    Sun, G.; Holliman, N. S.

    2009-01-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empi...

  9. Calibration of miniature prism-based stereoscopic imagers for precise spatial measurements

    Science.gov (United States)

    Machikhin, Alexander S.; Gorevoy, Alexey V.

    2016-04-01

    This paper aims to find the optimal mathematical model and calibration algorithm for an industrial endoscope equipped with a prism-based attachable stereo adapter, which allows imaging from two different viewpoints with a single sensor. We consider the conventional calibration methods for the pinhole camera model with polynomial distortion approximation and compare them with a ray-tracing model based on the vector form of Snell's law. In order to evaluate each of the proposed models, we have developed software for simulating various calibration procedures using different types of calibration targets. We use computer simulation to show that the pinhole camera models widely used in machine vision are very limited for describing prism-based endoscopic measurement systems. Our analysis identified the main problems for these models, such as the entrance pupil shift, non-homocentric beams, the required number of coefficients for the polynomial models, and the iterative forward ray aiming for the ray-tracing model. The proposed technique is flexible and can also be used to test the stability and convergence of the parameter estimation procedures and to compare calibration targets and strategies.
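The vector form of Snell's law that underlies the ray-tracing model can be sketched as follows (a generic refraction routine, not the authors' calibration code):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n (oriented against d), passing from refractive
    index n1 into n2. Returns None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A ray entering glass (n = 1.5) at 45 degrees bends toward the normal.
d_in = np.array([np.sin(np.radians(45)), 0.0, -np.cos(np.radians(45))])
t = refract(d_in, np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
```

Tracing each ray through the prism faces with this routine is what replaces the single-projection-center assumption of the pinhole model.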

  10. Stereoscopic optical viewing system

    Science.gov (United States)

    Tallman, Clifford S.

    1987-01-01

    An improved optical system which provides the operator a stereoscopic viewing field and depth of vision, particularly suitable for use in various machines such as electron or laser beam welding and drilling machines. The system features two separate but independently controlled optical viewing assemblies from the eyepiece to a spot directly above the working surface. Each optical assembly comprises a combination of eyepieces, turning prisms, telephoto lenses for providing magnification, achromatic imaging relay lenses, and final-stage pentagonal turning prisms. Adjustment for variations in distance from the turning prisms to the workpiece, necessitated by varying part sizes and configurations and by the operator's visual acuity, is provided separately for each optical assembly by means of separate manual controls at the operator console or within easy reach of the operator.

  11. Effective DQE (eDQE) for monoscopic and stereoscopic chest radiography imaging systems with the incorporation of anatomical noise

    Energy Technology Data Exchange (ETDEWEB)

    Boyce, Sarah J. [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27695 (United States); Choudhury, Kingshuk Roy [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Department of Biomedical Engineering, Duke University, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Department of Physics, Duke University, Durham, North Carolina 27705 (United States)

    2013-09-15

    Purpose: Stereoscopic chest biplane correlation imaging (stereo/BCI) has been proposed as an alternative modality to single view chest x-ray (CXR). The metrics effective modulation transfer function (eMTF), effective normalized noise power spectrum (eNNPS), and effective detective quantum efficiency (eDQE) have been proposed as clinically relevant metrics for assessing clinical system performance, taking into consideration magnification and scatter effects. This study compared the metrics eMTF, eNNPS, eDQE, and detectability index for stereo/BCI and single view CXR under isodose conditions at two magnifications for two anthropomorphic phantoms of differing sizes. Methods: Measurements for the eMTF were taken for two phantom sizes with an opaque edge test device using established techniques. The eNNPS was measured at two isodose conditions for two phantoms using established techniques. The scatter was measured for two phantoms using an established beam stop method. All measurements were also taken at two different magnifications with two phantoms. A geometrical phantom was used for comparison with prior results for CXR, although the results for an anatomy-free phantom are not expected to vary for BCI. Results: Stereo/BCI resulted in improved metrics compared to single view CXR. Results indicated that magnification can potentially improve detection performance, primarily due to the air gap, which reduced scatter by ∼20%. For both phantoms, at isodose, eDQE(0) for stereo/BCI was ∼100 times higher than that for CXR. Magnification at isodose improved eDQE(0) by ∼10 times for stereo/BCI. Increasing the dose did not improve eDQE. The detectability index for stereo/BCI was ∼100 times better than single view CXR for all conditions. The detectability index was also not improved with increased dose. Conclusions: The findings indicate that stereo/BCI with magnification may improve detectability of subtle lung nodules compared to single view CXR. Results were improved

  12. Stereoscopic High Dynamic Range Video

    OpenAIRE

    Rüfenacht, Dominic

    2011-01-01

    Stereoscopic video content is usually created using two or more cameras recording the same scene. Traditionally, those cameras have exactly the same intrinsic camera parameters. In this project, the exposure times of the cameras differ, allowing them to record different parts of the dynamic range of the scene. Image processing techniques are then used to enhance the dynamic range of the captured data. A pipeline for the recording, processing, and displaying of high dynamic range (...

  13. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity.

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-07-03

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.
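The wall assumption makes the 3D coordinate of an image feature a simple pinhole back-projection, with the depth supplied by the 2D laser scan; a minimal sketch with hypothetical camera intrinsics:

```python
import numpy as np

def feature_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a 3D point in the camera frame, using
    the range to a vertical wall taken from the 2D laser scan. Under the
    assumption that the wall is vertical and normal to the ground, one
    laser range fixes the depth of the whole image column."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A feature 80 px right and 60 px below the principal point, on a wall
# 5 m away according to the laser scan (intrinsics are illustrative).
p = feature_3d(400, 300, depth=5.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Such 3D feature points then enter the pose graph as constraints alongside the laser scan matches, which is what disambiguates the corridor.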

  14. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Science.gov (United States)

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  15. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

  16. Stereoscopic observations from meteorological satellites

    Science.gov (United States)

    Hasler, A. F.; Mack, R.; Negri, A.

    The capability of making stereoscopic observations of clouds from meteorological satellites is a new basic analysis tool with a broad spectrum of applications. Stereoscopic observations from satellites were first made using the early vidicon tube weather satellites (e.g., Ondrejka and Conover [1]). However, the only high quality meteorological stereoscopy from low orbit has been done from Apollo and Skylab, (e.g., Shenk et al. [2] and Black [3], [4]). Stereoscopy from geosynchronous satellites was proposed by Shenk [5] and Bristor and Pichel [6] in 1974 which allowed Minzner et al. [7] to demonstrate the first quantitative cloud height analysis. In 1978 Bryson [8] and desJardins [9] independently developed digital processing techniques to remap stereo images which made possible precision height measurement and spectacular display of stereograms (Hasler et al. [10], and Hasler [11]). In 1980 the Japanese Geosynchronous Satellite (GMS) and the U.S. GOES-West satellite were synchronized to obtain stereo over the central Pacific as described by Fujita and Dodge [12] and in this paper. Recently the authors have remapped images from a Low Earth Orbiter (LEO) to the coordinate system of a Geosynchronous Earth Orbiter (GEO) and obtained stereoscopic cloud height measurements which promise to have quality comparable to previous all GEO stereo. It has also been determined that the north-south imaging scan rate of some GEOs can be slowed or reversed. Therefore the feasibility of obtaining stereoscopic observations world wide from combinations of operational GEO and LEO satellites has been demonstrated. Stereoscopy from satellites has many advantages over infrared techniques for the observation of cloud structure because it depends only on basic geometric relationships. Digital remapping of GEO and LEO satellite images is imperative for precision stereo height measurement and high quality displays because of the curvature of the earth and the large angular separation of the
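The geometric principle behind stereoscopic cloud-height measurement can be illustrated with a toy two-ray intersection in a common vertical plane (a flat-earth simplification; as the abstract stresses, real satellite stereo requires remapping for earth curvature and large angular separations):

```python
import numpy as np

def cloud_height(x1, e1, x2, e2):
    """Height of a cloud seen from two ground-projected viewpoints at
    horizontal positions x1, x2 (km) with elevation angles e1, e2 (deg),
    assuming both lines of sight lie in the same vertical plane."""
    t1, t2 = np.tan(np.radians(e1)), np.tan(np.radians(e2))
    # Ray 1: z = t1 * (x - x1); Ray 2: z = t2 * (x2 - x). Intersect them.
    x = (t1 * x1 + t2 * x2) / (t1 + t2)
    return t1 * (x - x1)

# Symmetric 45-degree views from points 20 km apart place the cloud
# midway between them at 10 km altitude.
h = cloud_height(0.0, 45.0, 20.0, 45.0)
```

Because the height follows from pure triangulation, it is independent of the cloud's temperature, which is the advantage over infrared height retrieval noted in the abstract.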

  17. A system and method for adjusting and presenting stereoscopic content

    DEFF Research Database (Denmark)

    2013-01-01

    This invention relates to a system for and a method of adjusting and presenting stereoscopic content (100), the method comprising presenting stereoscopic content (100) to a user, where the stereoscopic content (100) comprises a first image part (101) intended to be viewed by one eye of the user and a second image part (102) intended to be viewed by the other eye of the user, wherein the presented stereoscopic content (100) has been adjusted by rotating the first image part (101) and the second image part (102) in relation to each other according to an adjustment parameter (ThetaAlpha) derived on the basis of one or more vision specific parameters (0M, ThetaMuAlphaChi, ThetaMuIotaNu, DeltaTheta) indicating abnormal vision for the user. In this way, presenting stereoscopic content is enabled that is adjusted specifically to the given person. This may e.g. be used for training purposes or for improved...

  18. Eliminating accommodation-convergence conflicts in stereoscopic displays: Can multiple-focal-plane displays elicit continuous and consistent vergence and accommodation responses?

    Science.gov (United States)

    MacKenzie, Kevin J.; Watt, Simon J.

    2010-02-01

    Conventional stereoscopic displays present images at a fixed focal distance. Depth variations in the depicted scene therefore result in conflicts between the stimuli to vergence and to accommodation. The resulting decoupling of accommodation and vergence responses can cause adverse consequences, including reduced stereo performance, difficulty fusing binocular images, and fatigue and discomfort. These problems could be eliminated if stereo displays could present correct focus cues. A promising approach to achieving this is to present each eye with a sum of images presented at multiple focal planes, and to approximate continuous variations in focal distance by distributing light energy across image planes - a technique referred to as depth-filtering [1]. Here we describe a novel multi-plane display in which we can measure accommodation and vergence responses. We report an experiment in which we compare these oculomotor responses to real stimuli and depth-filtered simulations of the same distance. Vergence responses were generally similar across conditions. Accommodation responses to depth-filtered images were inaccurate, however, showing an overshoot of the target, particularly in response to a small step-change in stimulus distance. This is surprising because we have previously shown that blur-driven accommodation to the same stimuli, viewed monocularly, is accurate and reliable. We speculate that an initial convergence-driven accommodation response, in combination with a weaker accommodative stimulus from depth-filtered images, leads to this overshoot. Our results suggest that stereoscopic multi-plane displays can be effective, but require smaller image-plane separations than monocular accommodation responses suggest.
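Depth-filtering as described above distributes a pixel's light across the two nearest image planes; a common sketch weights the planes linearly in diopters (the plane distances below are illustrative, not those of the authors' display):

```python
def depth_filter_weights(target_D, near_D, far_D):
    """Split a pixel's intensity between the near and far image planes in
    proportion to dioptric proximity, so the blended stimulus approximates
    an intermediate focal distance. All distances are in diopters."""
    w_near = (target_D - far_D) / (near_D - far_D)
    return w_near, 1.0 - w_near

# Simulate a point at 0.5 m (2.0 D) between planes at 0.4 m (2.5 D)
# and 1.0 m (1.0 D): two thirds of the light goes to the near plane.
w_near, w_far = depth_filter_weights(2.0, 2.5, 1.0)
```

The weights always sum to one, so total luminance is preserved while the effective focal demand moves between the two physical planes.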

  19. Perception of Spatial Features with Stereoscopic Displays.

    Science.gov (United States)

    1980-10-24

    aniseikonia (differences in retinal image size in the two eyes) are of little significance because only monocular perception of the display is required for... perception as a result of such factors as aniseikonia, uncorrected refractive errors, or phorias results in reduced stereopsis. However, because

  20. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown, GPS-denied, and representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained microaerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  1. Visual SLAM for Handheld Monocular Endoscope.

    Science.gov (United States)

    Grasa, Óscar G; Bernal, Ernesto; Casado, Santiago; Gil, Ismael; Montiel, J M M

    2014-01-01

    Simultaneous localization and mapping (SLAM) methods provide real-time estimation of 3-D models from the sole input of a handheld camera, routinely in mobile robotics scenarios. Medical endoscopic sequences mimic a robotic scenario in which a handheld camera (monocular endoscope) moves along an unknown trajectory while observing an unknown cavity. However, the feasibility and accuracy of SLAM methods have not been extensively validated with human in vivo image sequences. In this work, we propose a monocular visual SLAM algorithm tailored to deal with medical image sequences in order to provide an up-to-scale 3-D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated over synthetic data and human in vivo sequences corresponding to 15 laparoscopic hernioplasties where accurate ground-truth distances are available. It can be concluded that the proposed procedure is: 1) noninvasive, because only a standard monocular endoscope and a surgical tool are used; 2) convenient, because only a hand-controlled exploratory motion is needed; 3) fast, because the algorithm provides the 3-D map and the trajectory in real time; 4) accurate, because it has been validated with respect to ground-truth; and 5) robust to inter-patient variability, because it has performed successfully over the validation sequences.

  2. Jig For Stereoscopic Photography

    Science.gov (United States)

    Nielsen, David J.

    1990-01-01

    Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely the distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of the trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.

  3. The Exploration of Stereoscopic Imaging Technique in Dual View 60Co Container Inspection System

    Institute of Scientific and Technical Information of China (English)

    薛岳; 苗积臣; 吴志芳; 邢桂来

    2014-01-01

    In traditional container inspection systems, the projections of different objects overlap, making it difficult to distinguish between them. To overcome this shortcoming, stereoscopic imaging is applied in the dual-view 60Co container inspection system. Dual-view stereoscopic imaging and analysis is one of the core technologies of the system. By studying and comparing different stereoscopic imaging approaches, and according to the different terminals of the system, the anaglyph method and the time-sequential method are respectively applied to transmit the images acquired from the two perspectives to the corresponding eyes of the inspector, generating a stereoscopic image in the brain. The dual-view 60Co container inspection system designed in this paper implements stereoscopic image inspection of containers and other large objects. The system is able to distinguish overlapping objects effectively, and detection effectiveness is improved through the stereoscopic imaging of the inspected objects.

  4. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Directory of Open Access Journals (Sweden)

    Chen Shih-Wei

    2011-11-01

    Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessment of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image-area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main-lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup
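    The silhouette-to-features step can be sketched with a minimal kernel PCA: build an RBF kernel matrix over flattened binary frames, double-center it, and project onto its leading eigenvectors. This is a generic illustration, not the authors' implementation; the kernel width gamma, the frame size, and the component count are arbitrary choices.

    ```python
    import numpy as np

    def kernel_pca(X, n_components=2, gamma=0.05):
        """Project rows of X onto the top components in RBF-kernel feature space."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T     # pairwise squared distances
        K = np.exp(-gamma * d2)                          # RBF kernel matrix
        n = K.shape[0]
        one = np.ones((n, n)) / n
        Kc = K - one @ K - K @ one + one @ K @ one       # double-center the kernel
        vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
        idx = np.argsort(vals)[::-1][:n_components]      # pick the largest ones
        alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
        return Kc @ alphas                               # projected training features

    # toy "silhouettes": flattened binary frames (real frames would be much larger)
    rng = np.random.default_rng(0)
    frames = (rng.random((20, 64)) > 0.5).astype(float)
    feats = kernel_pca(frames, n_components=3)
    print(feats.shape)  # (20, 3)
    ```

    In the study, features like these would then feed the classifier separating "Non-PD" from "Drug-Off/On" gaits.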

  5. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    Science.gov (United States)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model named as Purposive FastICA (PFICA) with initializing by a simple color space transformation and a novel masking approach to automatically detect buildings from high resolution Google Earth imagery. ICA and FastICA algorithms are defined as Blind Source Separation (BSS) techniques for unmixing source signals using the reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: 1-Improving the FastICA algorithm using Moore-Penrose pseudo inverse matrix model, 2-Automated seeding of the PFICA algorithm based on LUV color space and proposed simple rules to split image into three regions; shadow + vegetation, baresoil + roads and buildings, respectively, 3-Masking out the final building detection results from PFICA outputs utilizing the K-means clustering algorithm with two number of clusters and conducting simple morphological operations to remove noises. Evaluation of the results illustrates that buildings detected from dense and suburban districts with divers characteristics and color combinations using our proposed method have 88.6% and 85.5% overall pixel-based and object-based precision performances, respectively.
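    The final masking step (K-means with two clusters) can be illustrated with a minimal 1-D K-means in numpy. The LUV transform, ICA unmixing, and morphological cleanup are omitted here, and the synthetic pixel values merely stand in for PFICA output; none of this is the authors' code.

    ```python
    import numpy as np

    def kmeans2(x, iters=20):
        """Minimal 2-cluster K-means on a 1-D array of pixel values."""
        centers = np.array([x.min(), x.max()], dtype=float)  # deterministic init
        for _ in range(iters):
            labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
            for k in range(2):
                if np.any(labels == k):
                    centers[k] = x[labels == k].mean()
        return labels, centers

    # synthetic PFICA-like response: bright "building" pixels vs. dark background
    rng = np.random.default_rng(1)
    pixels = np.concatenate([rng.normal(0.2, 0.05, 500),   # background response
                             rng.normal(0.8, 0.05, 100)])  # building response
    labels, centers = kmeans2(pixels)
    building_mask = labels == centers.argmax()  # keep the brighter cluster
    print(int(building_mask.sum()))             # roughly the 100 planted pixels
    ```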

  6. A Quality Assessment Method of Stereoscopic Images Based on Human Visual System

    Institute of Scientific and Technical Information of China (English)

    王阿红; 郁梅; 彭宗举; 王旭; 蒋刚毅; 周俊明; 邵枫

    2011-01-01

    By simulating the contrast sensitivity function, multi-channel effects, and stereoscopic perception of the human visual system, a quality assessment method for stereoscopic images based on human visual characteristics is proposed. To evaluate the quality of the left and right images, the wavelet transform is used to simulate the multi-channel effects of the human visual system, the wavelet coefficients of different spatial-frequency bands are weighted according to the contrast sensitivity function, and image quality is measured with the Canberra distance. Stereoscopic perception is assessed by computing the similarity between the absolute-disparity images of the original and test stereoscopic pairs. Regression analysis then integrates the two evaluation results into a single equation, which serves as the quality assessment model for stereoscopic images. Experimental results show that the proposed objective method yields stereoscopic image quality evaluations consistent with subjective assessment.
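    A toy version of the CSF-weighted, wavelet-domain comparison might look as follows. The one-level Haar transform and the subband weights are illustrative placeholders, not the paper's fitted contrast sensitivity function or its multi-level decomposition.

    ```python
    import numpy as np

    def haar2(img):
        """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
        a = (img[0::2] + img[1::2]) / 2       # row pairs: average
        d = (img[0::2] - img[1::2]) / 2       # row pairs: difference
        LL = (a[:, 0::2] + a[:, 1::2]) / 2
        LH = (a[:, 0::2] - a[:, 1::2]) / 2
        HL = (d[:, 0::2] + d[:, 1::2]) / 2
        HH = (d[:, 0::2] - d[:, 1::2]) / 2
        return LL, LH, HL, HH

    def canberra(u, v, eps=1e-12):
        """Canberra distance between two coefficient vectors."""
        return np.sum(np.abs(u - v) / (np.abs(u) + np.abs(v) + eps))

    def view_distance(ref, test, csf_weights=(1.0, 0.8, 0.8, 0.3)):
        """CSF-weighted Canberra distance between wavelet subbands of two views.
        The weights are placeholders, not a measured CSF."""
        return sum(w * canberra(r.ravel(), t.ravel())
                   for w, r, t in zip(csf_weights, haar2(ref), haar2(test)))

    rng = np.random.default_rng(0)
    ref = rng.random((8, 8))
    print(round(view_distance(ref, ref + 0.1), 3))  # nonzero for a distorted view
    ```

    An identical pair scores zero; distortion raises the weighted distance.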

  7. Enhanced perception of terrain hazards in off-road path choice: stereoscopic 3D versus 2D displays

    Science.gov (United States)

    Merritt, John O.; CuQlock-Knopp, V. Grayson; Myles, Kimberly

    1997-06-01

    Off-road mobility at night is a critical factor in modern military operations. Soldiers traversing off-road terrain, both on foot and in combat vehicles, often use 2D viewing devices (such as a driver's thermal viewer, or biocular or monocular night-vision goggles) for tactical mobility under low-light conditions. Perceptual errors can occur when 2D displays fail to convey adequately the contours of terrain. Some off-road driving accidents have been attributed to inadequate perception of terrain features due to using 2D displays (which do not provide binocular-parallax cues to depth perception). In this study, photographic images of terrain scenes were presented first in conventional 2D video, and then in stereoscopic 3D video. The percentage of possible correct answers for 2D and 3D were: 2D pretest equals 52%, 3D pretest equals 80%, 2D posttest equals 48%, 3D posttest equals 78%. Other recent studies conducted at the US Army Research Laboratory's Human Research and Engineering Directorate also show that stereoscopic 3D displays can significantly improve visual evaluation of terrain features, and thus may improve the safety and effectiveness of military off-road mobility operation, both on foot and in combat vehicles.

  8. An Objective Quality Assessment Metric for Stereoscopic Images Based on Perceptual Significance

    Institute of Scientific and Technical Information of China (English)

    段芬芳; 邵枫; 蒋刚毅; 郁梅; 李福翠

    2013-01-01

    Stereoscopic image quality assessment is an effective way to evaluate the performance of stereoscopic video systems. However, how to utilize human visual characteristics in quality assessment is still an unsolved issue. In this paper, an objective stereoscopic image quality assessment method based on perceptual significance is proposed. Firstly, by analyzing the effects of visual saliency and distortion on perceptual quality, we construct a perceptual significance model of stereoscopic images. Then, we separate the stereoscopic image into four types of regions: salient distorted, salient undistorted, non-salient distorted, and non-salient undistorted, and evaluate each independently. Finally, the region-wise results are weighted and integrated into an overall score. Experimental results show that the proposed method achieves high consistency with subjective assessment of stereoscopic images and effectively reflects the human visual system.

  9. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Science.gov (United States)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived-depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard cinematic storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness; objects far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes differ between individual stereoscopic displays, and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than when viewing static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
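    The fixed depth-mapping idea, remapping scene depth into a bounded perceived-depth budget on the display, can be sketched as follows; recomputing the mapping on every frame gives the dynamic variant. The ±5 cm budget below is an arbitrary example, not a value from the study.

    ```python
    import numpy as np

    def map_depth(scene_z, display_near, display_far):
        """Linearly remap scene depths into a comfortable perceived-depth range
        on the display (a simplified stand-in for the paper's mapping algorithm)."""
        z0, z1 = scene_z.min(), scene_z.max()
        if z1 == z0:
            return np.full_like(scene_z, (display_near + display_far) / 2)
        t = (scene_z - z0) / (z1 - z0)            # normalise to [0, 1]
        return display_near + t * (display_far - display_near)

    # dynamic version: recompute the mapping per frame as scene depth changes
    frame = np.array([2.0, 5.0, 9.0])             # metres in scene space
    mapped = map_depth(frame, -0.05, 0.05)        # ±5 cm perceived-depth budget
    print(mapped)  # scene extremes land exactly on the budget limits
    ```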

  10. Saccade amplitude disconjugacy induced by aniseikonia: role of monocular depth cues.

    Science.gov (United States)

    Pia Bucci, M; Kapoula, Z; Eggert, T

    1999-09-01

    The conjugacy of saccades is rapidly modified if the images are made unequal for the two eyes. Disconjugacy persists even in the absence of disparity which indicates learning. Binocular visual disparity is a major cue to depth and is believed to drive the disconjugacy of saccades to aniseikonic images. The goal of the present study was to test whether monocular depth cues can also influence the disconjugacy of saccades. Three experiments were performed in which subjects were exposed for 15-20 min to a 10% image size inequality. Three different images were used: a grid that contained a single monocular depth cue strongly indicating a frontoparallel plane; a random-dot pattern that contained a less prominent monocular depth cue (absence of texture gradient) which also indicates the frontoparallel plane; and a complex image with several overlapping geometric forms that contained a variety of monocular depth cues. Saccades became disconjugate in all three experiments. The disconjugacy was larger and more persistent for the experiment using the random-dot pattern that had the least prominent monocular depth cues. The complex image which had a large variety of monocular depth cues produced the most variable and less persistent disconjugacy. We conclude that the monocular depth cues modulate the disconjugacy of saccades stimulated by the disparity of aniseikonic images.

  11. Alternation Frequency Thresholds for Stereopsis as a Technique for Exploring Stereoscopic Difficulties

    Directory of Open Access Journals (Sweden)

    Svetlana Rychkova

    2011-01-01

    When stereoscopic images are presented alternately to the two eyes, stereopsis occurs at full-cycle frequencies F ⩾ 1 Hz for very simple stimuli, and F ⩾ 3 Hz for random-dot stereograms (e.g., Ludwig I, Pieper W, Lachnit H, 2007 “Temporal integration of monocular images separated in time: stereopsis, stereoacuity, and binocular luster” Perception & Psychophysics 69 92–102). Using twenty different stereograms presented through liquid-crystal shutters, we studied the transition to stereopsis with fifteen subjects. The onset of stereopsis was observed during a stepwise increase of the alternation frequency, and its disappearance during a stepwise decrease. The lowest F values (around 2.5 Hz) were observed with stimuli involving two to four simple disjoint elements (circles, arcs, rectangles). Higher F values were needed for stimuli containing slanted elements or curved surfaces (about 1 Hz increment), overlapping elements at two different depths (about 2.5 Hz increment), or camouflaged overlapping surfaces (> 7 Hz increment). A textured cylindrical surface with a horizontal axis appeared easier to interpret (5.7 Hz) than a pair of slanted segments separated in depth but forming a cross in projection (8 Hz). Training effects were minimal, and F usually increased as disparities were reduced. The hierarchy of difficulties revealed in the study may shed light on various problems that the brain needs to solve during stereoscopic interpretation. During the construction of the three-dimensional percept, the loss of information due to natural decay of the stimulus traces must be compensated by refreshes of visual input. In the discussion an attempt is made to link our results with recent advances in the comprehension of visual scene memory.

  12. Toward an impairment metric for stereoscopic video: a full-reference video quality metric to assess compressed stereoscopic video.

    Science.gov (United States)

    De Silva, Varuna; Arachchi, Hemantha Kodikara; Ekmekcioglu, Erhan; Kondoz, Ahmet

    2013-09-01

    The quality assessment of impaired stereoscopic video is a key element in designing and deploying advanced immersive media distribution platforms. A widely accepted quality metric to measure impairments of stereoscopic video is, however, still to be developed. As a step toward finding a solution to this problem, this paper proposes a full reference stereoscopic video quality metric to measure the perceptual quality of compressed stereoscopic video. A comprehensive set of subjective experiments is performed with 14 different stereoscopic video sequences, which are encoded using both the H.264 and high efficiency video coding compliant video codecs, to develop a subjective test results database of 116 test stimuli. The subjective results are analyzed using statistical techniques to uncover different patterns of subjective scoring for symmetrically and asymmetrically encoded stereoscopic video. The subjective result database is subsequently used for training and validating a simple but effective stereoscopic video quality metric considering heuristics of binocular vision. The proposed metric performs significantly better than state-of-the-art stereoscopic image and video quality metrics in predicting the subjective scores. The proposed metric and the subjective result database will be made publicly available, and it is expected that the proposed metric and the subjective assessments will have important uses in advanced 3D media delivery systems.

  13. Black and white digital photography and its stereoscopic impression

    OpenAIRE

    Liu, Jing; YING Guohu

    2012-01-01

    As a form of two-dimensional visual art, the faithful reproduction of the stereoscopic effect of three-dimensional space has long been valued and explored by artists and photography enthusiasts. Black-and-white images, with their single tone, simple lighting, and simple schema for conveying a sense of spatial depth on screen, show a unique artistic charm. Digital image processing technology, continuously improved and increasingly sophisticated, provides a convenient...

  14. Surface area and volume measurements of volcanic ash particles using micro-computed tomography (micro-CT): A comparison with scanning electron microscope (SEM) stereoscopic imaging and geometric considerations

    Science.gov (United States)

    Ersoy, Orkun; Şen, Erdal; Aydar, Erkan; Tatar, İlkan; Çelik, H. Hamdi

    2010-10-01

    Volcanic ash particles are important components of explosive eruptions, and their surface textures are the subject of intense research. Characterization of ash surfaces is crucial for understanding the physics of volcanic plumes, remote sensing measurements of ash and aerosols, interfacial processes, modelling transportation and deposition of tephra and characterizing eruptive styles. A number of different methods have been used over the years to arrive at surface area estimates. The more common methods include estimates based on geometric considerations (geometric surface area) and physisorption of gas molecules on the surface of interest (physical surface area). In this study, micro computed tomography (micro-CT), which is a non-destructive method providing three-dimensional data, enabled the measurement of surface area and volume of individual ash particles. Results were compared with the values obtained from SEM stereoscopic imaging and geometric considerations. Surface area estimates of micro-CT and SEM stereoscopic imaging are similar, with surface area/volume ratios (SA/V) of 0.0368 and 0.0467, respectively. Ash particle surface textures show a large deviation from that of simple geometric forms, and an approximation both to spheres and ellipsoids do not seem adequate for the representation of ash surface. SEM stereoscopic and/or micro-CT imaging are here suggested as good candidate techniques for the characterization of textures on macro-pore regions of ash particles.
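    For scale, a sphere's surface-area-to-volume ratio is 3/r, so the equivalent spherical radius implied by the micro-CT measurement is easy to compute. This assumes the reported SA/V ratios are in inverse micrometres, which the abstract does not state.

    ```python
    # For a sphere: SA/V = (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r.
    # Radius a sphere would need to match the micro-CT ratio of 0.0368 (assumed 1/µm):
    ratio_ct = 0.0368
    r_equiv = 3 / ratio_ct
    print(round(r_equiv, 1))  # 81.5 (µm, under the unit assumption above)
    ```

    The gap between this idealised figure and the measured surfaces is exactly why the authors find spheres and ellipsoids inadequate for representing ash texture.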

  15. Depth reversals in stereoscopic displays driven by apparent size

    Science.gov (United States)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  16. Panoramic Stereoscopic Video System for Remote-Controlled Robotic Space Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This Phase I project will demonstrate the feasibility of providing panoramic stereoscopic images for remote-controlled robotic space operations using three...

  17. Application of 3D stereoscopic visualization technology in casting aspect

    Institute of Scientific and Technical Information of China (English)

    Kang Jinwu; Zhang Xiaopeng; Zhang Chi; Liu Baicheng

    2014-01-01

    3D stereoscopic visualization technology is coming into more and more common use in the field of entertainment, and it is also beginning to cut a striking figure in the casting industry and in scientific research. The history, fundamental principles, and devices of 3D stereoscopic visualization technology are reviewed in this paper. The authors' research achievements on 3D stereoscopic visualization in the modeling and simulation of the casting process are presented. The technology can be used to observe complex 3D solid models of castings and the simulated results of solidification processes such as temperature, fluid flow, displacement, stress-strain, and microstructure, as well as predicted defects such as shrinkage/porosity, cracks, and deformation. It can also be used in other areas relating to 3D models, such as the assembly of dies, cores, etc. Several cases are given comparing the illustration of simulated results by traditional images and by red-blue 3D stereoscopic images; spatial shape is observed better with the new method. The prospects of 3D stereoscopic visualization in casting are discussed as well. The need for aided-viewing devices is still the most prominent problem of 3D stereoscopic visualization technology. However, 3D stereoscopic visualization represents the future of visualization technology, and as this problem is solved in the years ahead, great breakthroughs will certainly be made in its application to casting design and to the modeling and simulation of casting processes.
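    The red-blue stereoscopic images mentioned above can be composed by a simple channel swap: the red channel from the left view, the remaining channels from the right view. A minimal sketch, not the authors' tooling:

    ```python
    import numpy as np

    def anaglyph(left_rgb, right_rgb):
        """Compose a red-blue/red-cyan anaglyph: red channel from the left view,
        green and blue channels from the right view."""
        out = right_rgb.copy()
        out[..., 0] = left_rgb[..., 0]
        return out

    # toy 4x4 views: a red-ish left image and a blue-ish right image
    left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 200
    right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 2] = 120
    img = anaglyph(left, right)
    print(img[0, 0])  # red taken from left, blue taken from right
    ```

    Viewed through red-blue glasses, each eye receives only its intended view, which is what produces the depth impression in the casting renderings.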

  18. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with a correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display system utilizes binocular stereopsis. The authors have developed a monocular 3D vision system with accommodation mechanism, which is useful function for perceiving depth.

  19. The Technical Analysis on the Conversion of Stereoscopic Image in the Software of NUKE

    Institute of Scientific and Technical Information of China (English)

    何小凡

    2014-01-01

    The conversion of footage to stereoscopic imagery in post-production is regarded as a panacea for efficiently meeting the massive market demand for stereoscopic movie sources. Through reproduction in the professional software NUKE, this technique converts ordinary footage into stereoscopic imagery by building depth frame by frame. The process is divided into four major production stages: ROTO (separation), DEPTH (depth assignment), CLEAN PLATE (filling exposed regions), and CONVERT (conversion).

  20. Stereoscopic Vision System For Robotic Vehicle

    Science.gov (United States)

    Matthies, Larry H.; Anderson, Charles H.

    1993-01-01

    Distances estimated from images by cross-correlation. Two-camera stereoscopic vision system with onboard processing of image data developed for use in guiding robotic vehicle semiautonomously. Combination of semiautonomous guidance and teleoperation useful in remote and/or hazardous operations, including clean-up of toxic wastes, exploration of dangerous terrain on Earth and other planets, and delivery of materials in factories where unexpected hazards or obstacles can arise.
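    Distance-from-cross-correlation can be illustrated on a single rectified scanline: for each block in the left view, search the right view for the best-matching block, and the winning shift is the disparity (which is inversely proportional to distance). This SSD-based 1-D sketch is a stand-in for the system's actual correlator, with block size and search range chosen arbitrarily.

    ```python
    import numpy as np

    def disparity_1d(left, right, block=5, max_disp=8):
        """Per-pixel disparity along a rectified scanline pair via block matching."""
        half = block // 2
        n = len(left)
        disp = np.zeros(n, dtype=int)
        for x in range(half, n - half):
            patch = left[x - half:x + half + 1]
            best, best_err = 0, np.inf
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[x - d - half:x - d + half + 1]  # shifted candidate block
                err = np.sum((patch - cand) ** 2)            # SSD matching cost
                if err < best_err:
                    best, best_err = d, err
            disp[x] = best
        return disp

    # right scanline is the left one shifted by a known disparity of 3 pixels
    rng = np.random.default_rng(0)
    left = rng.random(40)
    right = np.zeros_like(left)
    right[:-3] = left[3:]
    disp = disparity_1d(left, right)
    print(np.unique(disp[5:38]))  # recovered disparity where the search can reach 3
    ```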

  1. Objective Stereoscopic Image Quality Assessment Model Based on Support Vector Regression

    Institute of Scientific and Technical Information of China (English)

    顾珊波; 邵枫; 蒋刚毅; 郁梅

    2012-01-01

    Stereoscopic image quality assessment is an effective way to evaluate the performance of stereoscopic video systems. However, how to use human visual characteristics effectively remains a research focus in objective stereoscopic image quality assessment. In this paper, combining the stability of singular values with the subjective visual characteristics of stereoscopic images, an objective stereoscopic image quality assessment model based on Support Vector Regression (SVR) is proposed. In the model, stereoscopic features are first obtained by extracting the singular values of the left and right images; the features are then fused according to the type of distortion; finally, the objective assessment values are predicted by SVR. Experimental results show that, when the proposed model is applied to a stereoscopic test database, the Pearson linear correlation coefficient exceeds 0.93, the Spearman rank correlation coefficient exceeds 0.94, the root mean square error (RMSE) approaches 6, and the outlier ratio is 0.00%, indicating that the model predicts human visual perception very well.
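    A minimal sketch of the pipeline, singular values as features feeding a support vector regressor, might look as follows. It uses scikit-learn's SVR; the toy "distortion" levels, the fabricated subjective scores, and the feature count are invented for illustration and omit the paper's distortion-type fusion step.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def singular_value_features(img, k=8):
        """Top-k singular values of an image: a distortion-sensitive feature vector."""
        s = np.linalg.svd(img, compute_uv=False)  # singular values, descending
        return s[:k]

    # toy training set: one base image perturbed by increasing amounts,
    # paired with a made-up monotonically decreasing "subjective score"
    rng = np.random.default_rng(0)
    base = rng.random((32, 32))
    X, y = [], []
    for level in range(10):
        img = base + level * 0.05 * rng.random((32, 32))  # stand-in "distortion"
        X.append(singular_value_features(img))
        y.append(5.0 - 0.4 * level)                       # pretend MOS value
    model = SVR(kernel='rbf', C=10.0).fit(np.array(X), np.array(y))
    pred = model.predict(np.array(X))
    print(pred.round(2))
    ```

    In the actual model, the regressor is trained on a subjective test database rather than synthetic scores.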

  2. Three-dimensional tracking of multiple skin-colored regions by a moving stereoscopic system.

    Science.gov (United States)

    Argyros, Antonis A; Lourakis, Manolis I A

    2004-01-10

    A system that performs three-dimensional (3D) tracking of multiple skin-colored regions (SCRs) in images acquired by a calibrated, possibly moving stereoscopic rig is described. The system consists of a collection of techniques that permit the modeling and detection of SCRs, the determination of their temporal association in monocular image sequences, the establishment of their correspondence between stereo images, and the extraction of their 3D positions in a world-centered coordinate system. The development of these techniques has been motivated by the need for robust, near-real-time tracking performance. SCRs are detected by use of a Bayesian classifier that is trained with the aid of a novel technique. More specifically, the classifier is bootstrapped with a small set of training data. Then, as new images are being processed, an iterative training procedure is employed to refine the classifier. Furthermore, a technique is proposed to enable the classifier to cope with changes in illumination. Tracking of SCRs in time as well as matching of SCRs in the images of the employed stereo rig is performed through computationally inexpensive and robust techniques. One of the main characteristics of the skin-colored region tracker (SCRT) instrument is its ability to report the 3D positions of SCRs in a world-centered coordinate system by employing a possibly moving stereo rig with independently verging CCD cameras. The system operates on images of dimensions 640 x 480 pixels at a rate of 13 Hz on a conventional Pentium 4 processor at 1.8 GHz. Representative experimental results from the application of the SCRT to image sequences are also provided.
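    The bootstrapped skin-colour classifier can be illustrated with a histogram-based Bayes rule over a 2-D colour space. The bin count, the two-channel representation, and the toy colour clusters below are assumptions for illustration, not the paper's training data or colour model.

    ```python
    import numpy as np

    def train_bayes(skin, nonskin, bins=8):
        """Histogram-based Bayes classifier over a 2-D colour space."""
        bounds = [(0, 256), (0, 256)]
        h_s, _ = np.histogramdd(skin, bins=bins, range=bounds)
        h_n, _ = np.histogramdd(nonskin, bins=bins, range=bounds)
        p_s = h_s / max(h_s.sum(), 1.0)               # P(colour | skin)
        p_n = h_n / max(h_n.sum(), 1.0)               # P(colour | non-skin)
        prior = h_s.sum() / (h_s.sum() + h_n.sum())   # P(skin)
        return p_s, p_n, prior

    def posterior(pixels, p_s, p_n, prior, bins=8):
        """P(skin | colour) for an array of (c1, c2) pixel values."""
        idx = np.clip((pixels // (256 // bins)).astype(int), 0, bins - 1)
        ls, ln = p_s[idx[:, 0], idx[:, 1]], p_n[idx[:, 0], idx[:, 1]]
        return ls * prior / (ls * prior + ln * (1 - prior) + 1e-12)

    rng = np.random.default_rng(0)
    skin = rng.normal([200, 120], 10, (500, 2)).clip(0, 255)   # toy skin cluster
    nonskin = rng.uniform(0, 255, (500, 2))                    # toy background
    p_s, p_n, prior = train_bayes(skin, nonskin)
    q = posterior(np.array([[200, 120], [30, 240]]), p_s, p_n, prior)
    print(q.round(2))  # high for the skin-like pixel, near zero for the other
    ```

    The paper's iterative refinement would re-estimate these histograms as new frames are classified.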

  3. Paradoxical fusion of two images and depth perception with a squinting eye.

    Science.gov (United States)

    Rychkova, S I; Ninio, J

    2009-03-01

    Some strabismic patients with inconstant squint can fuse two images in a single eye, and experience lustre and depth. One of these images is foveal and the other extrafoveal. Depth perception was tested on 30 such subjects. Relief was perceived mostly on the fixated image. Camouflaged continuous surfaces (hemispheres, cylinders) were perceived as bumps or hollows, without detail. Camouflaged rectangles could not be separated in depth from the background, while their explicit counterparts could. Slanted bars were mostly interpreted as frontoparallel near or remote bars. Depth responses were more frequent with stimuli involving inward rather than outward disparities, and were then heavily biased towards "near" judgements. All monocular fusion effects were markedly reduced after the recovery of normal stereoscopic vision following an orthoptic treatment. The depth effects reported here may provide clues on what stereoscopic pathways may or may not accomplish with incomplete retinal and misleading vergence information.

  4. A method for stereoscopic strain analysis of the right ventricle by digital image correlation during coronary bypass surgery: short communication.

    Science.gov (United States)

    Mirow, Nikolas; Hokka, Mikko; Nagel, Horst; Irqsusi, Marc; Moosdorf, Rainer G; Kuokkala, Veli-Tapani; Vogt, Sebastian

    2015-06-01

    Perioperative cardiosurgical management of volume therapy remains a challenging task in patients with severe heart disease. Early detection of congestive cardiac failure prevents subsequent low output and a worse outcome. We develop a non-invasive intraoperative method for right-ventricular strain analysis through digital image correlation, providing an effective means of controlling extracorporeal circulation.

  5. The technology of multiuser large display area and auto free-viewing stereoscopic display

    Science.gov (United States)

    Zhao, Tian-Qi; Zhang, He-Ling; Han, Jing

    2010-11-01

    Glasses-free grating-based stereoscopic display is one of the chief directions in stereoscopic display development, but it has always been constrained by the stereoscopic viewing range, the amount of stereoscopic information, and the number of users. This research uses a combination of a Fresnel lens array and controllable point light sources to deliver information to the two eyes of different users separately. Combined with eye-tracking technology, the glasses-free grating stereoscopic display becomes viewable over a range of 3D orientations by multiple users given two-view image sources, and viewable as a 360° stereoscopic overview by a single user given multi-view image sources.

  6. Stereoscopic Video Weld-Seam Tracker

    Science.gov (United States)

    Kennedy, Larry Z.

    1991-01-01

    Stereoscopic video camera and laser illuminator operates in conjunction with image-data-processing computer to locate weld seam and to map surface features in vicinity of seam. Intended to track seams to guide placement of welding torch in automatic welding system and to yield information on qualities of welds. More sensitive than prior optical seam trackers and suitable for use in production environment. Tracks nearly invisible gap between butted machined edges of two plates.

  7. A smart telerobotic system driven by monocular vision

    Science.gov (United States)

    Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.

    1994-01-01

    A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.

  8. Stereoscopic system for measuring particle trajectories past an underwater model

    Science.gov (United States)

    Liu, H.-T.; Weissman, Michael A.; White, Gary B.; Miner, G. E.; Gustafson, William T.

    1994-04-01

    A stereoscopic system was developed that integrates hardware and software components for image acquisition, digitization, processing, display, and measurements. The model-induced trajectories of nearly neutrally buoyant fluorescent particles, illuminated with a 15-W pulsed copper vapor laser, are tracked in a towing tank by stereoscopic time-lapse photography using two 35-mm cameras positioned at a 90-degree angle from the top and the side. A C program, HI, drives two data I/O boards hosted in a PC to set up the run parameters, control the operations of the laser and camera shutters, and acquire the stereo images. The photographic records are digitized and processed to derive the centroids of reference marks and particle images. The centroids are then fed into a Windows-based program, Track/3D, to perform image correlation, correction for image distortion, stereo conversion, stereoscopic display, and measurements. The display module incorporates a graphics library that drives a stereoscopic display adapter attached to a monitor; the stereogram must be viewed with polarizing glasses. Functions are available for image translation, rotation, zooming, and on-screen measurements. The velocity and acceleration components of the 3-D flow field induced by the model are derived from the trajectories, serving as a basis for whole-field stereoscopic quantitative flow visualization.
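    The two-camera geometry above admits a compact numerical sketch: with the cameras at 90 degrees (top and side), the image planes share one axis, so each pair of matched centroids yields a 3-D point directly, and velocities follow from finite differences along the trajectory. This is an illustrative reconstruction under assumed conventions, not the HI/Track/3D code; all names are hypothetical.

```python
# Sketch of orthogonal-view stereo conversion (hypothetical conventions):
# the top camera images the (x, y) plane, the side camera the (y, z) plane,
# so the y coordinate is seen by both views and is averaged as a consistency check.

def merge_orthogonal_views(top_xy, side_yz):
    """Fuse matched centroid pairs from the two views into 3-D points."""
    return [(x, (y_top + y_side) / 2.0, z)
            for (x, y_top), (y_side, z) in zip(top_xy, side_yz)]

def finite_difference(points, dt):
    """Central-difference velocity components along a 3-D trajectory
    sampled at a fixed time step dt."""
    return [tuple((b[i] - a[i]) / (2.0 * dt) for i in range(3))
            for a, b in zip(points, points[2:])]
```

Acceleration components would follow by applying `finite_difference` once more to the velocity sequence.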

  9. Surface area and volume measurements of volcanic ash particles using micro-computed tomography (micro-CT): A comparison with scanning electron microscope (SEM) stereoscopic imaging and Brunauer-Emmett-Teller (BET) model

    Science.gov (United States)

    Ersoy, Orkun; Şen, Erdal; Aydar, Erkan; Tatar, İlkan; Çelik, H. Hamdi

    2010-05-01

    Volcanic ash particles are important components of explosive eruptions, and their surface texture is the subject of intense research. Characterization of ash surfaces is crucial for understanding the physics of volcanic plumes, remote sensing measurements of ash and aerosols, interfacial processes, modelling the transportation and deposition of tephra, and characterizing eruptive styles. A number of different methods have been used over the years to arrive at surface area estimates. The more common methods include estimates based on geometric considerations (geometric surface area) and on the physisorption of gas molecules on the surface of interest (physical surface area). In this study, micro-computed tomography (micro-CT), a non-destructive method providing three-dimensional data, enabled the measurement of surface areas and volumes of individual ash particles. Specific surface area estimates for ash particles were also obtained using nitrogen as the gas adsorbent and the BET (Brunauer-Emmett-Teller) model. Results were compared with the values obtained from SEM stereoscopic imaging and geometric considerations. The surface area estimates of micro-CT and SEM stereoscopic imaging overlap, with mean specific surface areas of 0.0167 and 0.0214 m2/g, respectively. However, ash particle surface textures deviate considerably from their idealized geometric forms, and both the sphere and ellipsoid approximations proved inadequate for representing real ash surfaces. The higher surface area estimate (> 0.4 m2/g) obtained from the technique based on physical sorption of gases (here, the BET model) was attributed to its ability to capture surface area associated even with angstrom-sized pores. SEM stereoscopic and/or micro-CT imaging are suggested for characterizing textures in the macro-pore regions of ash particles.
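    The geometric (as opposed to physisorption-based) estimates discussed above reduce to simple closed forms. The sketch below computes the specific surface area of an equivalent sphere and an approximate ellipsoid area; the density value used in testing and the Thomsen exponent p ≈ 1.6075 are illustrative assumptions, not values from the study.

```python
import math

def sphere_ssa(radius_m, density_kg_m3):
    """Geometric specific surface area of a sphere (m^2/kg),
    which simplifies to 3 / (radius * density)."""
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / (volume * density_kg_m3)

def ellipsoid_area(a, b, c, p=1.6075):
    """Thomsen's approximation to the surface area of an ellipsoid
    with semi-axes a, b, c (exact when a == b == c)."""
    t = ((a * b) ** p + (a * c) ** p + (b * c) ** p) / 3.0
    return 4.0 * math.pi * t ** (1.0 / p)
```

Comparing such geometric values against BET-derived ones makes the surface-roughness deficit of the idealized shapes explicit.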

  10. Dynamic object recognition and tracking of mobile robot by monocular vision

    Science.gov (United States)

    Liu, Lei; Wang, Yongji

    2007-11-01

    Monocular vision is widely used in mobile robot motion control for its simple structure and ease of use. The major topic of this paper is an integrated approach for recognizing and tracking specified color targets dynamically and precisely with monocular vision, based on imaging principles. The processing pipeline strictly follows the mechanisms of visual processing, including pretreatment and recognition stages. In particular, color models are used to reduce the influence of illumination. Applied algorithms grounded in the practical application are used for image segmentation and clustering. Because a monocular camera cannot obtain depth information directly, after the target is recognized a 3D reconstruction principle is used to calculate the distance and direction from the robot to the target. To correct the monocular camera reading, a laser is used after the vision measurement. Finally, a visual servo system is designed to enable the robot to dynamically track the moving target.
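    The range-and-bearing step the abstract alludes to (using a 3D reconstruction principle to calculate distance and direction) can be sketched with the pinhole camera model, assuming the target's physical size is known; the function names and parameter choices are hypothetical, not the paper's implementation.

```python
import math

def pinhole_distance(focal_px, real_width_m, image_width_px):
    """Range to a target of known physical width via similar triangles:
    distance = focal_length * real_width / apparent_width."""
    return focal_px * real_width_m / image_width_px

def bearing_rad(u_px, cx_px, focal_px):
    """Horizontal direction of the target relative to the optical axis,
    from the target's image column u and the principal point cx."""
    return math.atan2(u_px - cx_px, focal_px)
```

A laser rangefinder reading, as in the paper, can then correct the vision-based distance estimate.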

  11. Stereoscopic Offset Makes Objects Easier to Recognize.

    Science.gov (United States)

    Caziot, Baptiste; Backus, Benjamin T

    2015-01-01

    Binocular vision is obviously useful for depth perception, but it might also enhance other components of visual processing, such as image segmentation. We used naturalistic images to determine whether giving an object a stereoscopic offset of 15-120 arcmin of crossed disparity relative to its background would make the object easier to recognize in briefly presented (33-133 ms), temporally masked displays. Disparity had a beneficial effect across a wide range of disparities and display durations. Most of this benefit occurred whether or not the stereoscopic contour agreed with the object's luminance contour. We attribute this benefit to an orienting of spatial attention that selected the object and its local background for enhanced 2D pattern processing. At longer display durations, contour agreement provided an additional benefit, and a separate experiment using random-dot stimuli confirmed that stereoscopic contours plausibly contributed to recognition at the longer display durations in our experiment. We conclude that in real-world situations binocular vision confers an advantage not only for depth perception, but also for recognizing objects from their luminance patterns and bounding contours.

  12. Stereoscopic Offset Makes Objects Easier to Recognize.

    Directory of Open Access Journals (Sweden)

    Baptiste Caziot

    Full Text Available Binocular vision is obviously useful for depth perception, but it might also enhance other components of visual processing, such as image segmentation. We used naturalistic images to determine whether giving an object a stereoscopic offset of 15-120 arcmin of crossed disparity relative to its background would make the object easier to recognize in briefly presented (33-133 ms), temporally masked displays. Disparity had a beneficial effect across a wide range of disparities and display durations. Most of this benefit occurred whether or not the stereoscopic contour agreed with the object's luminance contour. We attribute this benefit to an orienting of spatial attention that selected the object and its local background for enhanced 2D pattern processing. At longer display durations, contour agreement provided an additional benefit, and a separate experiment using random-dot stimuli confirmed that stereoscopic contours plausibly contributed to recognition at the longer display durations in our experiment. We conclude that in real-world situations binocular vision confers an advantage not only for depth perception, but also for recognizing objects from their luminance patterns and bounding contours.

  13. Black and white digital photography and its stereoscopic impression

    Directory of Open Access Journals (Sweden)

    LIU Jing

    2012-08-01

    Full Text Available As a form of two-dimensional visual art, the faithful reproduction of the stereoscopic effect of three-dimensional space has long been valued and explored by artists and photography enthusiasts. The black-and-white image, with its single color, simple lighting, and simple schema, conveys the stereoscopic visual experience of pictorial space and shows a unique artistic charm. Digital image processing technology, continuously improved and increasingly sophisticated, provides convenient post-processing and creative tools for the three-dimensional rendering of digital images.

  14. No-Reference Stereoscopic IQA Approach: From Nonlinear Effect to Parallax Compensation

    Directory of Open Access Journals (Sweden)

    Ke Gu

    2012-01-01

    Full Text Available The last decade has seen a booming of applications of stereoscopic images/videos and the corresponding technologies, such as 3D modeling, reconstruction, and disparity estimation. However, only a very limited number of stereoscopic image quality assessment metrics have been proposed over the years. In this paper, we propose a new no-reference stereoscopic image quality assessment algorithm based on a nonlinear additive model, an ocular dominance model, and saliency-based parallax compensation. Our studies using the Toyama database yield three valuable findings. First, the quality of a stereoscopic image has a nonlinear relationship with a direct summation of the two monoscopic image qualities. Second, it is a rational assumption that the right-eye response has the higher impact on stereoscopic image quality, based on a sampling survey in ocular dominance research. Third, saliency-based parallax compensation, which accounts for differing stereoscopic image contents, is considerably effective in improving the prediction performance of image quality metrics. Experimental results confirm that the proposed stereoscopic image quality assessment paradigm has superior prediction accuracy compared to state-of-the-art competitors.
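    The paper's first two findings (nonlinear pooling and right-eye dominance) can be caricatured in a few lines. The weight and exponent below are illustrative stand-ins, not the authors' fitted parameters, and the saliency-based parallax compensation step is omitted entirely.

```python
def stereo_quality(q_left, q_right, w_right=0.6, gamma=0.8):
    """Hypothetical dominance-weighted, nonlinear pooling of two
    monoscopic quality scores in [0, 1]: a weighted sum (right eye
    weighted more heavily) passed through a compressive nonlinearity."""
    s = (1.0 - w_right) * q_left + w_right * q_right
    return s ** gamma
```

With this shape, degrading the right view hurts the predicted stereoscopic quality more than degrading the left view by the same amount.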

  15. fMRI investigation of monocular pattern rivalry.

    Science.gov (United States)

    Mendola, Janine D; Buckthought, Athena

    2013-01-01

    In monocular pattern rivalry, a composite image is shown to both eyes. The observer experiences perceptual alternations in which the two stimulus components alternate in clarity or salience. We used fMRI at 3T to image brain activity while participants perceived monocular rivalry passively or indicated their percepts with a task. The stimulus patterns were left/right oblique gratings, face/house composites, or a nonrivalrous control stimulus that did not support the perception of transparency or image segmentation. All stimuli were matched for luminance, contrast, and color. Compared with the control stimulus, the cortical activation for passive viewing of grating rivalry included dorsal and ventral extrastriate cortex, superior and inferior parietal regions, and multiple sites in frontal cortex. When the BOLD signal for the object rivalry task was compared with the grating rivalry task, a similar whole-brain network was engaged, but with significantly greater activity in extrastriate regions, including V3, V3A, fusiform face area (FFA), and parahippocampal place area (PPA). In addition, for the object rivalry task, FFA activity was significantly greater during face-dominant periods whereas PPA activity was greater during house-dominant periods. Our results demonstrate that slight stimulus changes that trigger monocular rivalry recruit a large whole-brain network, as previously identified for other forms of bistability. Moreover, the results indicate that rivalry for complex object stimuli preferentially engages extrastriate cortex. We also establish that even with natural viewing conditions, endogenous attentional fluctuations in monocular pattern rivalry will differentially drive object-category-specific cortex, similar to binocular rivalry, but without complete suppression of the nondominant image.

  16. Disambiguating Stereoscopic Transparency Using a Thaumatrope Approach.

    Science.gov (United States)

    Yan-Jen Su; Yung-Yu Chuang

    2015-08-01

    Volume rendering is a popular visualization technique for scientific computing and medical imaging. By assigning proper transparency, it allows us to see more information inside the volume. However, because volume rendering projects complex 3D structures into the 2D domain, the resulting visualization often suffers from ambiguity, and its spatial relationships can be difficult to recognize correctly, especially when the scene or setting is highly transparent. Stereoscopic displays do not by themselves solve the problem, even though they add an additional dimension that seems helpful for resolving the ambiguity. This paper proposes a thaumatrope method to enhance 3D understanding with stereoscopic transparency for volume rendering. Our method first generates an additional cue with less spatial ambiguity by using a high opacity setting. To avoid cluttering the actual content, we select only its prominent feature for display. By alternating the actual content and the selected feature quickly, the viewer perceives a whole volume while its spatial understanding is enhanced. A user study compared the proposed method with the original stereoscopic volume rendering and with a static combination of the actual content and the selected feature using a 3D display. Results show that the proposed thaumatrope approach provides better spatial understanding than the compared approaches.

  17. Digital stereoscopic cinema: the 21st century

    Science.gov (United States)

    Lipton, Lenny

    2008-02-01

    Over 1000 theaters in more than a dozen countries have been outfitted with digital projectors using the Texas Instruments DLP engine equipped to show field-sequential 3-D movies using the polarized method of image selection. Shuttering eyewear and advanced anaglyph products are also being deployed for image selection. Many studios are in production with stereoscopic films, and some have committed to producing their entire output of animated features in 3-D. This is a time of technology change for the motion picture industry.

  18. Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues.

    Science.gov (United States)

    Warren, Paul A; Rushton, Simon K

    2009-05-01

    We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.

  19. Efficient stereoscopic contents file format on the basis of ISO base media file format

    Science.gov (United States)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    A lot of 3D content has been widely used in multimedia services; however, real 3D video content has been adopted only in limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video content have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video. However, users can only store and display this acquired stereoscopic content on their own devices because no common file format for it exists. This limitation prevents users from sharing their content with other users, which makes it difficult for the market for stereoscopic content to expand. Therefore, this paper proposes a common file format for stereoscopic content on the basis of the ISO base media file format, which enables users to store and exchange pure stereoscopic content. This technology is also currently under development as an international MPEG standard, called the stereoscopic video application format.
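    For context, the ISO base media file format organizes a file as a sequence of "boxes", each headed by a 32-bit big-endian size and a four-character type code; a stereoscopic application format adds new box types on top of this structure. A minimal top-level box walker (a sketch that ignores 64-bit sizes and nested containers) looks like:

```python
import struct

def parse_boxes(data):
    """Walk top-level ISO BMFF boxes: each box starts with a 32-bit
    big-endian size (including the 8-byte header) and a 4-char type."""
    boxes, offset = [], 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from('>I4s', data, offset)
        if size < 8:  # malformed or special size; stop in this sketch
            break
        boxes.append((box_type.decode('ascii'), size))
        offset += size
    return boxes
```

Real parsers must also handle `size == 1` (a 64-bit largesize follows the type) and `size == 0` (the box extends to the end of the file).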

  20. [Dendrobium officinale stereoscopic cultivation method].

    Science.gov (United States)

    Si, Jin-Ping; Dong, Hong-Xiu; Liao, Xin-Yan; Zhu, Yu-Qiu; Li, Hui

    2014-12-01

    The study aimed to make the most of the available space in Dendrobium officinale cultivation facilities, to reveal the variation in yield and functional components of stereoscopically cultivated D. officinale, and to improve quality, yield, and efficiency. The agronomic traits and yield variation of stereoscopically cultivated D. officinale were studied in a field experiment. The contents of polysaccharide and extractum were determined using the phenol-sulfuric acid method and Appendix X A of the 2010 edition of the Chinese Pharmacopoeia. The results showed that land utilization under stereoscopic cultivation increased 2.74 times, and the fresh and dry weights of stems, leaves, and their totals per unit area were all higher than those of ground-cultivated plants. There was no significant difference in polysaccharide content between stereoscopic and ground cultivation, but the extractum content and the total content of polysaccharide and extractum were significantly higher. In addition, the polysaccharide content and the total content of polysaccharide and extractum in plants from the top two levels of the stereoscopic culture matrix were significantly higher than those from the other levels and from ground cultivation. Stereoscopic cultivation can effectively improve the utilization of space and yield, while the total content of polysaccharide and extractum is significantly higher than that of ground-cultivated plants. The significant differences in Dendrobium polysaccharides among plants from different heights of the stereoscopic culture matrix may be associated with the light factor.

  1. Wavelet-transform-based enhancement algorithm for stereoscopic images degraded by fog and haze

    Institute of Scientific and Technical Information of China (English)

    邱奕敏; 周毅

    2015-01-01

    This paper proposes a new image enhancement algorithm for fog and haze stereoscopic images based on edge sharpening of wavelet coefficients, using the multi-scale characteristics of the wavelet transform to improve the clarity of fog and haze stereoscopic images; it is mainly intended for moderately polluted conditions. The algorithm combines the depth information of stereoscopic images with multi-scale wavelet decomposition, setting a control factor in the high-frequency sub-bands at each scale to regulate the strength of contrast enhancement, and sharpening the edges of the low-frequency sub-band to highlight the overall outline. Experimental results show that in terms of PSNR, visual effect, and subjective DMOS scores, the proposed method outperforms conventional edge sharpening and four-level wavelet-transform methods, provides good edge enhancement and detail preservation, and has the same computational complexity as the conventional wavelet transform.
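    The control-factor idea (scale the high-frequency sub-band before reconstruction) can be illustrated with a one-level 1-D Haar transform. The paper operates on 2-D sub-images of a multi-level decomposition; this is a deliberately minimal sketch with hypothetical names.

```python
def haar_1d(signal):
    """One level of the unnormalized 1-D Haar transform:
    returns (approximation, detail) for an even-length sequence."""
    approx = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def inverse_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def enhance(signal, gain=1.5):
    """Boost the detail (high-frequency) coefficients by a control
    factor before reconstruction; the gain value is illustrative."""
    approx, detail = haar_1d(signal)
    return inverse_haar_1d(approx, [gain * d for d in detail])
```

With gain > 1 the edge content is amplified while the low-frequency average of each pair is preserved.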

  2. A Case of Functional (Psychogenic) Monocular Hemianopia Analyzed by Measurement of Hemifield Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Tsuyoshi Yoneda

    2013-12-01

    Full Text Available Purpose: Functional monocular hemianopia is an extremely rare condition, for which measurement of hemifield visual evoked potentials (VEPs) has not been previously described. Methods: A 14-year-old boy with functional monocular hemianopia was followed up with Goldmann perimetry and measurement of hemifield and full-field VEPs. Results: The patient had a history of monocular temporal hemianopia of the right eye following headache, nausea and ague. There was no relative afferent pupillary defect, and a color perception test was normal. Goldmann perimetry revealed a vertical monocular temporal hemianopia of the right eye; the hemianopia on the right was also detected with a binocular visual field test. Computed tomography, magnetic resonance imaging (MRI) and MR angiography of the brain including the optic chiasm as well as orbital MRI revealed no abnormalities. On the basis of these results, we diagnosed the patient's condition as functional monocular hemianopia. Pattern VEPs according to the International Society for Clinical Electrophysiology of Vision (ISCEV) standard were within the normal range. The hemifield pattern VEPs for the right eye showed a symmetrical latency and amplitude for nasal and temporal hemifield stimulation. One month later, the visual field defect of the patient spontaneously disappeared. Conclusions: The latency and amplitude of hemifield VEPs for a patient with functional monocular hemianopia were normal. Measurement of hemifield VEPs may thus provide an objective tool for distinguishing functional hemianopia from hemifield loss caused by an organic lesion.

  3. Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort.

    Science.gov (United States)

    Frey, Jérémy; Appriou, Aurélien; Lotte, Fabien; Hachet, Martin

    2016-01-01

    With stereoscopic displays a sensation of depth that is too strong could impede visual comfort and may result in fatigue or pain. We used Electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillations power following stereoscopic objects presentation can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average when 7 consecutive variations are measured (up to 93%). Performances are stable (≈62.5%) when a simplified signal processing is used to simulate online analyses or when the number of EEG channels is lessened. This study could lead to adaptive systems that automatically suit stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect with users' state by modifying the overlap of left and right images according to the classifier output.
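    The jump from 63% accuracy on single depth variations to 74% over 7 consecutive ones is the usual effect of aggregating weak, roughly independent decisions. A majority vote over a window, and its expected accuracy under an independence assumption, can be sketched as follows (an illustration, not the authors' classification pipeline):

```python
from math import comb

def majority_vote(predictions):
    """Fuse consecutive binary comfort/discomfort decisions (1 = discomfort)."""
    return int(sum(predictions) > len(predictions) / 2)

def majority_accuracy(p_single, n):
    """Probability that a majority of n independent decisions, each
    correct with probability p_single, is correct (n odd): the upper
    tail of a Binomial(n, p_single) distribution."""
    return sum(comb(n, k) * p_single ** k * (1.0 - p_single) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

Under independence, seven 63%-accurate decisions fused this way land in the ballpark of the paper's reported 74%.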

  4. Shot Segmentation for Binocular Stereoscopic Video Based on Spatial-Temporal Feature Clustering

    Science.gov (United States)

    Duan, Feng-feng

    2016-12-01

    Shot segmentation is the key to content-based analysis, indexing, and retrieval of binocular stereoscopic video. To address the low accuracy that results when 2D shot-segmentation methods are applied to a single (monocular) view of a stereoscopic sequence, and the disadvantages of existing stereoscopic shot-segmentation methods, a shot segmentation method for binocular stereoscopic video based on spatial-temporal feature clustering (STFC) is proposed. The method extracts the color and brightness features of the left-view frames in the temporal domain, together with the depth feature obtained by matching the left and right frames in the spatial domain. The feature differences between frames are calculated and quantified, the differences are clustered in three-dimensional feature space, and the classes are optimized iteratively to locate shot boundaries. Experimental results show that, compared with the latest existing algorithm, the proposed method effectively reduces false and missed detections, especially the inaccurate detection of gradual (smooth) transitions in binocular stereoscopic video, and achieves higher segmentation accuracy.
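    The difference-then-cluster step can be sketched in miniature: compute per-frame feature differences, then separate "cut" from "no-cut" differences with a 1-D 2-means. The paper clusters three feature differences (color, brightness, depth) jointly; this sketch collapses that to one dimension and assumes at least one cut exists.

```python
def frame_differences(frames):
    """Mean absolute difference between consecutive frame feature vectors."""
    return [sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)
            for f1, f2 in zip(frames, frames[1:])]

def two_means_boundaries(diffs, iters=20):
    """1-D 2-means over the difference sequence; members of the
    high-centroid cluster are reported as shot-boundary frame indices."""
    lo, hi = min(diffs), max(diffs)
    for _ in range(iters):
        high = [d for d in diffs if abs(d - hi) <= abs(d - lo)]
        low = [d for d in diffs if abs(d - hi) > abs(d - lo)]
        if high:
            hi = sum(high) / len(high)
        if low:
            lo = sum(low) / len(low)
    return [i + 1 for i, d in enumerate(diffs) if abs(d - hi) <= abs(d - lo)]
```

The paper's iterative class optimization plays the role of the centroid-update loop here, but in three dimensions.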

  5. Case study: using a stereoscopic display for mission planning

    Science.gov (United States)

    Kleiber, Michael; Winkelholz, Carsten

    2009-02-01

    This paper reports the results of a study investigating the benefits of using an autostereoscopic display in training for the targeting process of the German Air Force. The study examined how stereoscopic 3D visualizations can help to improve flight path planning and mission preparation in general. An autostereoscopic display was used because it allows the operator to perceive stereoscopic images without shutter glasses, which facilitates integration into a workplace with conventional 2D monitors and arbitrary lighting conditions.

  6. Validation of Data Association for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-01-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a multidisciplinary problem with ramifications within several fields. One of the key aspects of its popularity and success is the data fusion produced by SLAM techniques, providing strong and robust sensory systems even with simple devices, such as webcams in monocular SLAM. This work studies a novel batch validation algorithm, the highest order hypothesis compatibility test (HOHCT), against one of the most popular approaches, joint compatibility branch and bound (JCBB). The HOHCT approach was developed to improve the performance of delayed inverse-depth initialization monocular SLAM, a previously developed monocular SLAM algorithm based on parallax estimation. Both HOHCT and JCBB are extensively tested and compared within a delayed inverse-depth initialization monocular SLAM framework, showing the strengths and costs of this proposal.
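    Both JCBB and the HOHCT build on the individual chi-square compatibility gate between a predicted feature and a candidate observation; a minimal 2-D version of that gate (an illustration, not code from the paper) is:

```python
def mahalanobis_sq(innovation, cov):
    """Squared Mahalanobis distance for a 2-D innovation
    (observation minus prediction) with 2x2 innovation covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x, y = innovation
    return (x * (inv[0][0] * x + inv[0][1] * y)
            + y * (inv[1][0] * x + inv[1][1] * y))

CHI2_95_DOF2 = 5.991  # 95% chi-square gate for 2 degrees of freedom

def individually_compatible(innovation, cov, gate=CHI2_95_DOF2):
    """Accept the pairing when the innovation falls inside the gate."""
    return mahalanobis_sq(innovation, cov) <= gate
```

Batch methods such as JCBB then test whole sets of pairings jointly rather than one at a time, which is where their robustness (and cost) comes from.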

  7. A new combination of monocular and stereo cues for dense disparity estimation

    Science.gov (United States)

    Mao, Miao; Qin, Kaihuai

    2013-07-01

    Disparity estimation is a popular and important topic in computer vision and robotics. Stereo vision is commonly used for the task, but most existing methods fail in textureless regions and fall back on numerical interpolation there. Monocular features, which may contain helpful depth information, are usually ignored. We propose a novel method that combines monocular and stereo cues to compute dense disparities from a pair of images. Image regions are categorized into reliable regions (textured and unoccluded) and unreliable regions (textureless or occluded). Stable and accurate disparities can be obtained in reliable regions. For unreliable regions, k-means is used to find the most similar reliable regions in terms of monocular cues. The method is simple and effective. Experiments show that it generates more accurate disparity maps than existing methods on images with large textureless regions, e.g. snow and icebergs.

  8. Monocular indoor localization techniques for smartphones

    Directory of Open Access Journals (Sweden)

    Hollósi Gergely

    2016-12-01

    Full Text Available In the last decade a huge amount of research has been devoted to the indoor visual localization of personal smartphones. Considering the available sensor capabilities, monocular odometry provides a promising solution, even reflecting the requirements of augmented reality applications. This paper gives an overview of state-of-the-art results on monocular visual localization. For this purpose, the essential basics of computer vision are presented and the most promising solutions are reviewed.

  9. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    Science.gov (United States)

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  10. Image-guided localization accuracy of stereoscopic planar and volumetric imaging methods for stereotactic radiation surgery and stereotactic body radiation therapy: a phantom study.

    Science.gov (United States)

    Kim, Jinkoo; Jin, Jian-Yue; Walls, Nicole; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J; Ryu, Samuel

    2011-04-01

    To evaluate the positioning accuracies of two image-guided localization systems, ExacTrac and On-Board Imager (OBI), in a stereotactic treatment unit. An anthropomorphic pelvis phantom with eight internal metal markers (BBs) was used. The center of one BB was set as the plan isocenter. The phantom was set up on a treatment table with various initial setup errors. Then, the errors were corrected using each of the investigated systems. The residual errors were measured with respect to the radiation isocenter using orthogonal portal images with a field size of 3 × 3 cm². The angular localization discrepancies of the two systems and the correction accuracy of the robotic couch were also studied. A pair of pre- and post-cone beam computed tomography (CBCT) images was acquired for each angular correction. Then, the correction errors were estimated by using the internal BBs through fiducial marker-based registrations. The isocenter localization errors (μ ± σ) in the left/right, posterior/anterior, and superior/inferior directions were, respectively, -0.2 ± 0.2 mm, -0.8 ± 0.2 mm, and -0.8 ± 0.4 mm for ExacTrac, and 0.5 ± 0.7 mm, 0.6 ± 0.5 mm, and 0.0 ± 0.5 mm for OBI CBCT. The registration angular discrepancy was 0.1 ± 0.2° between the two systems, and the maximum angle correction error of the robotic couch was 0.2° about all axes. Both the ExacTrac and the OBI CBCT systems showed approximately 1 mm isocenter localization accuracy. The angular discrepancy of the two systems was minimal, and the robotic couch angle correction was accurate. These positioning uncertainties should be taken as a lower bound because the results were based on a rigid dosimetry phantom. Copyright © 2011 Elsevier Inc. All rights reserved.
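
    The "approximately 1 mm" accuracy claim can be sanity-checked from the per-axis mean residuals quoted above. A minimal sketch, treating the three axis means as a 3D offset vector:

```python
import math

def residual_magnitude(dx_mm, dy_mm, dz_mm):
    """3D magnitude of the mean residual setup error, in mm."""
    return math.sqrt(dx_mm**2 + dy_mm**2 + dz_mm**2)

# ExacTrac mean residuals: -0.2, -0.8, -0.8 mm → about 1.15 mm overall.
print(round(residual_magnitude(-0.2, -0.8, -0.8), 2))
# OBI CBCT mean residuals: 0.5, 0.6, 0.0 mm → about 0.78 mm overall.
print(round(residual_magnitude(0.5, 0.6, 0.0), 2))
```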

  11. Stereoscopic highlighting: 2D graph visualization on stereo displays.

    Science.gov (United States)

    Alper, Basak; Höllerer, Tobias; Kuchera-Morin, JoAnn; Forbes, Angus

    2011-12-01

    In this paper we present a new technique and prototype graph visualization system, stereoscopic highlighting, to help answer accessibility and adjacency queries when interacting with a node-link diagram. Our technique utilizes stereoscopic depth to highlight regions of interest in a 2D graph by projecting these parts onto a plane closer to the viewpoint of the user. This technique aims to isolate and magnify specific portions of the graph that need to be explored in detail without resorting to other highlighting techniques like color or motion, which can then be reserved to encode other data attributes. This mechanism of stereoscopic highlighting also enables focus+context views by juxtaposing a detailed image of a region of interest with the overall graph, which is visualized at a further depth with correspondingly less detail. In order to validate our technique, we ran a controlled experiment with 16 subjects comparing static visual highlighting to stereoscopic highlighting on 2D and 3D graph layouts for a range of tasks. Our results show that while for most tasks the difference in performance between stereoscopic highlighting alone and static visual highlighting is not statistically significant, users performed better when both highlighting methods were used concurrently. In more complicated tasks, 3D layout with static visual highlighting outperformed 2D layouts with a single highlighting method. However, it did not outperform the 2D layout utilizing both highlighting techniques simultaneously. Based on these results, we conclude that stereoscopic highlighting is a promising technique that can significantly enhance graph visualizations for certain use cases.

  12. Monocular Video Guided Garment Simulation

    Institute of Scientific and Technical Information of China (English)

    Fa-Ming Li; Xiao-Wu Chen∗; Bin Zhou; Fei-Xiang Lu; Kan Guo; Qiang Fu

    2015-01-01

    We present a prototype to generate a garment-shape sequence guided by a monocular video sequence. It combines a physically-based simulation with a boundary-based modification. Given a garment in the video worn on a mannequin, the simulation generates an initial garment shape by exploiting the mannequin shapes estimated from the video. The modification then deforms the simulated 3D shape into a shape that matches the garment 2D boundary extracted from the video. According to the matching correspondences between the vertices on the shape and the points on the boundary, the modification is implemented by attracting the matched vertices and their neighboring vertices. For best-matching correspondences and efficient performance, three criteria are introduced to select the candidate vertices for matching. Since modifying each garment shape independently may cause inter-frame oscillations, changes made by the modification are also propagated from one frame to the next. As a result, the generated garment 3D shape sequence is stable and similar to the garment video sequence. We demonstrate the effectiveness of our prototype with a number of examples.

  13. Stereoscopic applications for design visualization

    Science.gov (United States)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  14. Multispectral polarization viewing angle analysis of circular polarized stereoscopic 3D displays

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2010-02-01

    In this paper we propose a method to characterize polarization-based stereoscopic 3D displays using multispectral Fourier-optics viewing-angle measurements. Full polarization analysis of the light emitted by the display across the full viewing cone is made at 31 wavelengths in the visible range. Vertical modulation of the polarization state is observed and explained by the position of the phase-shift filter in the display structure. In addition, strong spectral dependence of the ellipticity and polarization degree is observed. These features come from the strong spectral dependence of the phase-shift film and introduce some imperfections (color shifts and reduced contrast). Using the measured transmission properties of the two filters of the 3D glasses, the resulting luminance through each filter is computed for the left- and right-eye views. Monocular contrasts for each eye and binocular contrasts are computed in the observer space, and Qualified Monocular and Binocular Viewing Spaces (QMVS and QBVS) can be deduced in the same way as for auto-stereoscopic 3D displays, allowing direct comparison of performance.
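
    The luminance-through-filter computation described above amounts to a standard photometric integration: spectral radiance times filter transmission, weighted by the photopic luminosity function and integrated over wavelength. A minimal sketch assuming wavelength-sampled arrays (variable names are illustrative, not from the paper):

```python
import numpy as np

KM = 683.0  # lm/W, maximum luminous efficacy of photopic vision

def luminance_through_filter(radiance, transmission, v_lambda, d_lambda):
    """Luminance reaching one eye through a glasses filter: display spectral
    radiance x filter transmission, weighted by V(lambda), integrated over
    the sampled wavelengths (simple rectangle rule with step d_lambda)."""
    return KM * np.sum(radiance * transmission * v_lambda) * d_lambda
```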

  15. Automatic gear sorting system based on monocular vision

    Directory of Open Access Journals (Sweden)

    Wenqi Wu

    2015-11-01

    Full Text Available An automatic gear sorting system based on monocular vision is proposed in this paper. A CCD camera fixed on the top of the sorting system is used to obtain images of the gears on the conveyor belt. The gears' features, including the number of holes, number of teeth and color, are extracted and used to categorize the gears. Photoelectric sensors are used to locate the gears' positions and produce the trigger signals for the pneumatic cylinders. The automatic gear sorting is achieved by using pneumatic actuators to push different gears into their corresponding storage boxes. The experimental results verify the validity and reliability of the proposed method and system.
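
    Once the hole count, tooth count and color have been extracted from the image, the sorting decision itself reduces to a feature-tuple lookup. A hypothetical sketch (the bin table below is invented for illustration; the paper does not list its gear types):

```python
def assign_bin(num_holes, num_teeth, color, bin_table):
    """Return the storage-box id for a gear's extracted features,
    or 'reject' when the feature tuple is unknown."""
    return bin_table.get((num_holes, num_teeth, color), "reject")

# Invented example bin table mapping feature tuples to storage boxes.
bins = {(4, 20, "silver"): 1, (6, 32, "black"): 2}
print(assign_bin(4, 20, "silver", bins))  # → 1
print(assign_bin(5, 20, "silver", bins))  # → reject
```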

  16. Automatic stereoscopic system for person recognition

    Science.gov (United States)

    Murynin, Alexander B.; Matveev, Ivan A.; Kuznetsov, Victor D.

    1999-06-01

    A biometric access control system based on identification of the human face is presented. The system performs remote measurements of the necessary face features. Two different scenarios of system behavior are implemented. The first assumes verification of personal data entered by the visitor from a console using a keyboard or card reader. The system functions as an automatic checkpoint that strictly controls the access of different visitors. The other scenario makes it possible to identify visitors without any personal identifier or pass; only the person's biometrics are used to identify the visitor. The recognition system automatically finds the necessary identification information previously stored in the database. Two laboratory models of the recognition system were developed. The models are designed to use different information types and sources. In addition to stereoscopic images input to the computer from cameras, the models can use voice data and some physical characteristics, such as the person's height, measured by the imaging system.

  17. Enhanced monocular visual odometry integrated with laser distance meter for astronaut navigation.

    Science.gov (United States)

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-03-11

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method.
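
    The core of the scale-ambiguity fix can be sketched in a few lines: monocular VO recovers the trajectory only up to an unknown scale, and one absolute laser range to a tracked point fixes it. This single-measurement version is a deliberate simplification; the paper fuses repeated measurements along the sequence to correct scale drift:

```python
import numpy as np

def correct_scale(positions, vo_depth_of_laser_spot, laser_range_m):
    """Rescale an up-to-scale monocular-VO trajectory (N x 3) to metric units.

    vo_depth_of_laser_spot: depth of the laser spot as estimated by the VO
    (arbitrary scale); laser_range_m: absolute distance measured by the
    laser distance meter. Names are illustrative, not from the paper.
    """
    scale = laser_range_m / vo_depth_of_laser_spot
    return np.asarray(positions, dtype=float) * scale

traj = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.5]])
print(correct_scale(traj, vo_depth_of_laser_spot=2.0, laser_range_m=4.0))
```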

  18. High-Speed Generation of Illumination Spectra for a Stereoscopic Endoscope

    Science.gov (United States)

    Fritz, Eric

    2011-01-01

    Traditional stereoscopic vision (3D) is achieved through the use of two separate cameras, arranged to emulate human eyes. This method works well in most projects, but becomes impractical in small-scale designs, such as surgical endoscopes. This project is focused on developing a stereoscopic endoscope, using a single camera and Conjugated Multiple-Bandpass Filters (CMBF) to produce stereoscopic vision. Each half of the filter is built to allow a distinct spectrum to pass through, while blocking the complementary spectrum. A system with complementary filters can produce stereoscopic images. To accomplish this, the light must be filtered at the source to match the filters at the camera. Additionally, the light source and camera must be synchronized so that each image shows only one filter spectrum. In this paper, I will describe the design and characterization of the prototype electro-optical system, including optical throughput measurements and video produced using this method.

  19. Development of a modular stereoscopic pre-visualisation and display framework

    Science.gov (United States)

    Kuchelmeister, Volker

    2011-03-01

    The increasing popularity of stereoscopic content in the entertainment industry and computer graphics applications, and the availability of affordable capture and display systems, is in contrast to the actual knowledge of underlying stereoscopic design principles and fundamental concepts. Content creators and educators inexperienced in stereoscopy require integrated, easy-to-use and flexible tools which can assist in the process of creating the three-dimensional "look" they are after within the limits of a comfortable viewing experience. The framework proposed in this paper, a custom stereoscopic export plug-in for the popular 3D modeling application Google Sketchup and a flexible stereoscopic format conversion and display engine, allows for stereoscopic previsualisation in near real-time in a format of choice. The user interface can recommend stereoscopic settings according to the scene, camera and display properties, calculates corresponding values according to manual entries, but also leaves unrestricted control over all parameters. The display engine allows different stereoscopic formats to be shown and saves the result in the form of images with metadata for reference. Particular attention is put on usability, accessibility and tight integration.

  20. Monocular Road Detection Using Structured Random Forest

    Directory of Open Access Journals (Sweden)

    Liang Xiao

    2016-05-01

    Full Text Available Road detection is a key task for autonomous land vehicles. Monocular vision-based road detection algorithms are mostly based on machine learning approaches and are usually cast as classification problems. However, the pixel-wise classifiers are faced with the ambiguity caused by changes in road appearance, illumination and weather. An effective way to reduce the ambiguity is to model the contextual information with structured learning and prediction. Currently, the widely used structured prediction model in road detection is the Markov random field or conditional random field. However, the random field-based methods require additional complex optimization after pixel-wise classification, making them unsuitable for real-time applications. In this paper, we present a structured random forest-based road-detection algorithm which is capable of modelling the contextual information efficiently. By mapping the structured label space to a discrete label space, the test function of each split node can be trained in a similar way to that of the classical random forests. Structured random forests make use of the contextual information of image patches as well as the structural information of the labels to get more consistent results. Besides this benefit, by predicting a batch of pixels in a single classification, the structured random forest-based road detection can be much more efficient than the conventional pixel-wise random forest. Experimental results tested on the KITTI-ROAD dataset and data collected in typical unstructured environments show that structured random forest-based road detection outperforms the classical pixel-wise random forest both in accuracy and efficiency.
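
    The key trick above, mapping the structured label space to a discrete label space so that split-node tests can be trained like those of a classical random forest, can be illustrated with a toy stand-in. Real structured forests derive the mapping from the label patches themselves (e.g. by clustering); reading a small binary road-mask patch as a binary code is a deliberate simplification for illustration:

```python
import numpy as np

def patch_to_class(label_patch):
    """Map a structured label (a small binary road-mask patch, e.g. 2x2)
    to a single discrete class id by reading its bits as a binary code."""
    bits = np.asarray(label_patch, dtype=int).ravel()
    return int("".join(str(b) for b in bits), 2)

print(patch_to_class([[1, 0], [0, 1]]))  # → 9  (binary 1001)
```

    With patches collapsed to discrete ids like this, each split node can be trained with an ordinary multi-class purity criterion, while prediction still emits a whole patch of pixels at once.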

  1. Zero-disparity adjustment of multiview stereoscopic images based on SIFT matching

    Institute of Scientific and Technical Information of China (English)

    李实秋; 雷建军; 周志远; 张海龙; 范晓红

    2015-01-01

    A zero-disparity adjustment method based on SIFT matching was proposed for multiview stereoscopic images used in autostereoscopic display systems. First, SIFT was introduced for pixel matching between adjacent views. Then, the result of SIFT matching was filtered by a saliency mask, extracted using a frequency-tuned saliency model, and the key-point for disparity control was selected. Finally, the disparity between neighboring views was computed from the SIFT matching points, and zero-disparity adjustment was conducted based on the principle of disparity control: the disparity of the selected key-point was adjusted to zero. Experimental results demonstrate that the proposed method can effectively adjust the disparity of multiview stereoscopic images and generate vivid and comfortable 3D scenes for autostereoscopic display.
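
    The final adjustment step, shifting a view horizontally so that the selected key-point ends up with zero disparity, can be sketched in a few lines. This is a simplified stand-in for the paper's disparity-control principle; the disparity argument is assumed to come from the saliency-filtered SIFT matches:

```python
import numpy as np

def zero_disparity_shift(view, key_disparity_px):
    """Shift a view horizontally by -key_disparity_px so the chosen
    key-point's disparity becomes zero; vacated columns are zero-filled."""
    shifted = np.roll(view, -key_disparity_px, axis=1)
    if key_disparity_px > 0:
        shifted[:, -key_disparity_px:] = 0
    elif key_disparity_px < 0:
        shifted[:, :-key_disparity_px] = 0
    return shifted

img = np.arange(12).reshape(3, 4)
print(zero_disparity_shift(img, 1))  # first row becomes [1, 2, 3, 0]
```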

  2. Monocular camera and IMU integration for indoor position estimation.

    Science.gov (United States)

    Zhang, Yinlong; Tan, Jindong; Zeng, Ziming; Liang, Wei; Xia, Ye

    2014-01-01

    This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike traditional estimation methods, we fix the monocular camera facing downward toward the floor and collect successive frames in which textures are orderly distributed and feature points robustly detected, rather than using a forward-oriented camera to sample unknown and disordered scenes with a pre-determined frame rate and auto-focus metric scale. Meanwhile, the camera adopts a constant metric scale and an adaptive frame rate determined by the IMU data. Furthermore, the corresponding distinctive image feature point matching approaches are employed for visual localization: optical flow for the fast-motion mode; the Canny edge detector, Harris feature point detector and SIFT descriptor for the slow-motion mode. For superfast motion and abrupt rotation, where images from the camera are blurred and unusable, the extended Kalman filter is exploited to estimate the IMU outputs and derive the corresponding trajectory. Experimental results validate that the proposed method is effective and accurate in indoor positioning. Since the system is computationally efficient and compact in size, it is well suited for indoor navigation for visually impaired people and indoor localization for wheelchair users.

  3. Stereoscopic Visualization of Plasma Simulation Data

    Science.gov (United States)

    Jones, Samuel; Cardenas, Rosa; Kim, Charlson; Parker, Scott

    2000-10-01

    Large-scale three-dimensional simulation of realistic fusion and space plasmas generates massive amounts of raw numerical data. Scientific visualization is an important tool in the analysis of this data. Stereoscopic projection is a visualization technique allowing data to be presented spatially, with visual separation cues indicating the relative depth of the data. This allows researchers to see three-dimensional structures that are not easily shown in purely two-dimensional representations. We have implemented a low-cost stereo projection system running from a Linux-based Intel cluster. This system is used to display images created with the visualization package IBM Open Data Explorer (Open-DX). We will present results of our use of this technology in the study of various plasma phenomena, including the complex spatial nature of magnetic fields embedded in simulated spheromak plasma.

  4. Stereoscopic Optical Signal Processor

    Science.gov (United States)

    Graig, Glenn D.

    1988-01-01

    An optical signal processor produces the two-dimensional cross correlation of images from a stereoscopic video camera in real time. The cross correlation is used to identify an object, determine its distance, or measure its movement. The left and right cameras modulate beams from a light source for correlation in a video detector. A switch in position 1 produces information about the range of the object viewed by the cameras. Position 2 gives information about movement. Position 3 helps to identify the object.

  5. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    Science.gov (United States)

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  6. Binocular and monocular depth cues in online feedback control of 3D pointing movement.

    Science.gov (United States)

    Hu, Bo; Knill, David C

    2011-06-30

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and, thus, were available in an observer's retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size, and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and lack of response in monocular conditions.

  7. Monocular Blindness: Is It a Handicap?

    Science.gov (United States)

    Knoth, Sharon

    1995-01-01

    Students with monocular vision may be in need of special assistance and should be evaluated by a multidisciplinary team to determine whether the visual loss is affecting educational performance. This article discusses the student's eligibility for special services, difficulty in performing depth perception tasks, difficulties in specific classroom…

  8. Disparity biasing in depth from monocular occlusions.

    Science.gov (United States)

    Tsirlin, Inna; Wilcox, Laurie M; Allison, Robert S

    2011-07-15

    Monocular occlusions have been shown to play an important role in stereopsis. Among other contributions to binocular depth perception, monocular occlusions can create percepts of illusory occluding surfaces. It has been argued that the precise location in depth of these illusory occluders is based on the constraints imposed by occlusion geometry. Tsirlin et al. (2010) proposed that when these constraints are weak, the depth of the illusory occluder can be biased by a neighboring disparity-defined feature. In the present work we test this hypothesis using a variety of stimuli. We show that when monocular occlusions provide only partial constraints on the magnitude of depth of the illusory occluders, the perceived depth of the occluders can be biased by disparity-defined features in the direction unrestricted by the occlusion geometry. Using this disparity bias phenomenon we also show that in illusory occluder stimuli where disparity information is present, but weak, most observers rely on disparity while some use occlusion information instead to specify the depth of the illusory occluder. Taken together our experiments demonstrate that in binocular depth perception disparity and monocular occlusion cues interact in complex ways to resolve perceptual ambiguity. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  10. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal of understanding neuroplasticity, which is a key characteristic of the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept relies heavily on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. Visibility of monocular symbology in transparent head-mounted display applications

    Science.gov (United States)

    Winterbottom, M.; Patterson, R.; Pierce, B.; Gaska, J.; Hadley, S.

    2015-05-01

    With increased reliance on head-mounted displays (HMDs), such as the Joint Helmet Mounted Cueing System and F-35 Helmet Mounted Display System, research concerning visual performance has also increased in importance. Although monocular HMDs have been used successfully for many years, a number of authors have reported significant problems with their use. Certain problems have been attributed to binocular rivalry when differing imagery is presented to the two eyes. With binocular rivalry, the visibility of the images in the two eyes fluctuates, with one eye's view becoming dominant, and thus visible, while the other eye's view is suppressed, which alternates over time. Rivalry is almost certainly created when viewing an occluding monocular HMD. For semi-transparent monocular HMDs, however, much of the scene is binocularly fused, with additional imagery superimposed in one eye. Binocular fusion is thought to prevent rivalry. The present study was designed to investigate differences in visibility between monocularly and binocularly presented symbology at varying levels of contrast and while viewing simulated flight over terrain at various speeds. Visibility was estimated by measuring the presentation time required to identify a test probe (tumbling E) embedded within other static symbology. Results indicated that there were large individual differences, but that performance decreased with decreased test probe contrast under monocular viewing relative to binocular viewing conditions. Rivalry suppression may reduce visibility of semi-transparent monocular HMD imagery. However, factors, such as contrast sensitivity, masking, and conditions such as monofixation, will be important to examine in future research concerning visibility of HMD imagery.

  12. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    Science.gov (United States)

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable, then processed and stored in a 5337-image database. Pedagogic evaluation was performed in 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) in favor of the interactive and stereoscopic teaching resources.

  13. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  14. SU-E-J-39: Comparison of PTV Margins Determined by In-Room Stereoscopic Image Guidance and by On-Board Cone Beam Computed Tomography Technique for Brain Radiotherapy Patients

    Energy Technology Data Exchange (ETDEWEB)

    Ganesh, T; Paul, S; Munshi, A; Sarkar, B; Krishnankutty, S; Sathya, J; George, S; Jassal, K; Roy, S; Mohanti, B [Fortis Memorial Research Institute, Gurgaon (India)

    2014-06-01

    Purpose: Stereoscopic in-room kV image guidance is a faster tool for daily monitoring of patient positioning. Our centre, for the first time in the world, has integrated such a solution from BrainLAB (ExacTrac) with Elekta's volumetric cone beam computed tomography (XVI). Using van Herk's formula, we compared the planning target volume (PTV) margins calculated by both these systems for patients treated with brain radiotherapy. Methods: For a total of 24 patients who received partial or whole brain radiotherapy, verification images were acquired for 524 treatment sessions by XVI and for 334 sessions by ExacTrac out of the total 547 sessions. Systematic and random errors were calculated in the cranio-caudal, lateral and antero-posterior directions for both techniques. PTV margins were then determined using the van Herk formula. Results: In the cranio-caudal direction, the systematic error, random error and calculated PTV margin were 0.13 cm, 0.12 cm and 0.41 cm with XVI and 0.14 cm, 0.13 cm and 0.44 cm with ExacTrac. The corresponding values in the lateral direction were 0.13 cm, 0.10 cm and 0.40 cm with XVI and 0.13 cm, 0.12 cm and 0.42 cm with ExacTrac imaging. The antero-posterior values were 0.10 cm, 0.11 cm and 0.34 cm with XVI and 0.13 cm, 0.16 cm and 0.43 cm with ExacTrac imaging. The margins estimated with the two imaging modalities were comparable within a ± 1 mm limit. Conclusion: Verification of setup errors along the major axes by two independent imaging systems showed that the results are comparable and within ± 1 mm. This implies that planar-imaging-based ExacTrac can yield the same accuracy in setup error determination as the time-consuming volumetric imaging, which is considered the gold standard. Accordingly, PTV margins estimated by this faster imaging technique can be confidently used in clinical practice.
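
    The van Herk margin recipe referenced in this abstract is conventionally written as M = 2.5Σ + 0.7σ, where Σ and σ are the standard deviations of the systematic and random setup errors. A minimal sketch of that standard form (assuming the authors used it unmodified; the function name is mine):

```python
def van_herk_margin(big_sigma_cm, small_sigma_cm):
    """PTV margin (cm) from the van Herk recipe M = 2.5*Sigma + 0.7*sigma.

    big_sigma_cm:   SD of systematic setup errors across patients (cm).
    small_sigma_cm: SD of random, day-to-day setup errors (cm).
    """
    return 2.5 * big_sigma_cm + 0.7 * small_sigma_cm

# Cranio-caudal values reported for XVI in the abstract:
print(round(van_herk_margin(0.13, 0.12), 2))  # -> 0.41
```

    With the cranio-caudal XVI values above (Σ = 0.13 cm, σ = 0.12 cm) this yields 0.409 cm, consistent with the reported 0.41 cm margin.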

  15. Stereoscopic displays and applications; Proceedings of the Meeting, Santa Clara, CA, Feb. 12-14, 1990

    Science.gov (United States)

    Merritt, John O. (Editor); Fisher, Scott S. (Editor)

    1990-01-01

    The present conference discusses topics in the fields of user interfaces for stereoscopic displays, three-dimensional (3-D) visualization, novel 3-D displays, and applications of stereoscopic displays. Attention is given to 3-D cockpit displays, novel computational control techniques for stereo 3-D displays, characterization of higher-dimensional presentation techniques, volume visualization on a stereoscopic display, and stereoscopic displays for terrain-database visualization. Also discussed are the experimental design of cyberspaces, a volumetric environment for interactive design of three-dimensional objects, videotape recording of 3-D TV images, remote manipulator tasks rendered possible by stereo TV, 3-D endoscopy based on alternating-frame technology, and advancements in computer-generated barrier-strip autostereography.

  16. Controllable liquid crystal gratings for an adaptive 2D/3D auto-stereoscopic display

    Science.gov (United States)

    Zhang, Y. A.; Jin, T.; He, L. C.; Chu, Z. H.; Guo, T. L.; Zhou, X. T.; Lin, Z. X.

    2017-02-01

    2D/3D-switchable, viewpoint-controllable and 2D/3D-localizable auto-stereoscopic displays based on controllable liquid crystal gratings are proposed in this work. Using a dual-layer staggered structure on the top and bottom substrates as driving electrodes within a liquid crystal cell, the ratio between the transmitting region and the shielding region can be selectively controlled by the corresponding driving circuit, which means that 2D/3D switching and 3D video sources with different disparity images can be realized in the same auto-stereoscopic display system. Furthermore, a controlled region of the liquid crystal gratings can present a 3D mode while the other regions maintain a 2D mode in the same auto-stereoscopic display. This work demonstrates that controllable liquid crystal gratings have potential applications in the field of auto-stereoscopic display.

  17. Analysis of brain activity and response during monoscopic and stereoscopic visualization

    Science.gov (United States)

    Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele

    2012-03-01

    Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is whether and how it is possible to measure and analyze the level of emotional involvement and attention of observers during the stereoscopic visualization of a movie or a virtual environment. These research aims represent a challenge, due to the large number of sensorial, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we performed some preliminary experiments collecting electroencephalographic (EEG) data from a group of users with a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.

  18. Surface formation and depth in monocular scene perception.

    Science.gov (United States)

    Albert, M K

    1999-01-01

    The visual perception of monocular stimuli perceived as 3-D objects has received considerable attention from researchers in human and machine vision. However, most previous research has focused on how individual 3-D objects are perceived. Here this is extended to a study of how the structure of 3-D scenes containing multiple, possibly disconnected objects and features is perceived. Da Vinci stereopsis, stereo capture, and other surface formation and interpolation phenomena in stereopsis and structure-from-motion suggest that small features having ambiguous depth may be assigned depth by interpolation with features having unambiguous depth. I investigated whether vision may use similar mechanisms to assign relative depth to multiple objects and features in sparse monocular images, such as line drawings, especially when other depth cues are absent. I propose that vision tends to organize disconnected objects and features into common surfaces to construct 3-D-scene interpretations. Interpolations that are too weak to generate a visible surface percept may still be strong enough to assign relative depth to objects within a scene. When there exists more than one possible surface interpolation in a scene, the visual system's preference for one interpolation over another seems to be influenced by a number of factors, including: (i) proximity, (ii) smoothness, (iii) a preference for roughly frontoparallel surfaces and 'ground' surfaces, (iv) attention and fixation, and (v) higher-level factors. I present a variety of demonstrations and an experiment to support this surface-formation hypothesis.

  19. Stereoscopic medical data video quality issues.

    Science.gov (United States)

    Patrona, Foteini; Mademlis, Ioannis; Kalaganis, Fotios; Pitas, Ioannis; Lyroudia, Kleoniki

    2016-04-01

    Stereoscopic medical videos are recorded, e.g., in stereo endoscopy or during video recording medical/dental operations. This paper examines quality issues in the recorded stereoscopic medical videos, as insufficient quality may induce visual fatigue to doctors. No attention has been paid to stereo quality and ensuing fatigue issues in the scientific literature so far. Two of the most commonly encountered quality issues in stereoscopic data, namely stereoscopic window violations and bent windows, were searched for in stereo endoscopic medical videos. Furthermore, an additional stereo quality issue encountered in dental operation videos, namely excessive disparity, was detected and fixed. The conducted experiments prove the existence of such quality issues in stereoscopic medical data and highlight the need for their detection and correction.

  20. Monocular and binocular depth discrimination thresholds.

    Science.gov (United States)

    Kaye, S B; Siddiqui, A; Ward, A; Noonan, C; Fisher, A C; Green, J R; Brown, M C; Wareing, P A; Watt, P

    1999-11-01

    Measurement of stereoacuity at varying distances, by real or simulated depth stereoacuity tests, is helpful in the evaluation of patients with binocular imbalance or strabismus. Although the cue of binocular disparity underpins stereoacuity tests, there may be variable amounts of other binocular and monocular cues inherent in a stereoacuity test. In such circumstances, a combined monocular and binocular threshold of depth discrimination may be measured--stereoacuity conventionally referring to the situation where binocular disparity giving rise to retinal disparity is the only cue present. A child-friendly variable distance stereoacuity test (VDS) was developed, with a method for determining the binocular depth threshold (BT) from the combined monocular and binocular threshold of depth discrimination (CT). Subjects with normal binocular function, reduced binocular function, and apparently absent binocularity were included. To measure the threshold of depth discrimination, subjects were required by means of a hand control to align two electronically controlled spheres at viewing distances of 1, 3, and 6 m. Stereoacuity was also measured using the TNO, Frisby, and Titmus stereoacuity tests. BTs were calculated according to the function BT = arctan[(1/tan αC - 1/tan αM)^(-1)], where αC and αM are the angles subtended at the nodal points by objects situated at the monocular threshold (αM) and the combined monocular-binocular threshold (αC) of discrimination. In subjects with good binocularity, BTs were similar to their combined thresholds, whereas subjects with reduced and apparently absent binocularity had binocular thresholds 4 and 10 times higher than their combined thresholds (CT). The VDS binocular thresholds showed significantly higher correlation and agreement with the TNO test and with the binocular thresholds of the Frisby and Titmus tests than the corresponding combined thresholds (p = 0.0019). The VDS was found to be an easy-to-use real-depth
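
    The threshold function in this abstract translates directly into a small calculation. A sketch assuming angles in radians (variable names are mine, not the authors'):

```python
import math

def binocular_threshold(alpha_c, alpha_m):
    """BT = arctan((1/tan(alphaC) - 1/tan(alphaM))**-1), where alphaC and
    alphaM are the angles subtended at the nodal points by objects at the
    combined (C) and purely monocular (M) thresholds, in radians."""
    return math.atan(1.0 / (1.0 / math.tan(alpha_c) - 1.0 / math.tan(alpha_m)))
```

    As the combined threshold angle approaches the monocular one (i.e., binocularity contributes nothing), the bracketed difference vanishes and BT grows toward its ceiling, consistent with the much higher binocular thresholds the abstract reports for subjects with apparently absent binocularity.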

  1. Novel approach for mobile robot localization using monocular vision

    Science.gov (United States)

    Zhong, Zhiguang; Yi, Jianqiang; Zhao, Dongbin; Hong, Yiping

    2003-09-01

    This paper presents a novel approach for mobile robot localization using monocular vision. The proposed approach locates a robot relative to the target toward which it moves. Two points are selected from the target as feature points. Once the image coordinates of the two feature points are detected, the position and motion direction of the robot can be determined from the detected coordinates. Unlike previously reported geometric pose estimation or landmark-matching methods, this approach requires neither artificial landmarks nor an accurate map of the indoor environment. It requires less computation and greatly simplifies the localization problem. The validity and flexibility of the proposed approach are demonstrated by experiments performed on real images. The results show that this new approach is not only simple and flexible but also achieves high localization precision.

  2. Markerless monocular tracking system for guided external eye surgery.

    Science.gov (United States)

    Monserrat, C; Rupérez, M J; Alcañiz, M; Mataix, J

    2014-12-01

    This paper presents a novel markerless monocular tracking system aimed at guiding ophthalmologists during external eye surgery. This new tracking system performs very accurate tracking of the eye by detecting invariant points using only textures that are present in the sclera, i.e., without using traditional features like the pupil and/or cornea reflections, which remain partially or totally occluded in most surgeries. Two well-known algorithms that compute invariant points and correspondences between pairs of images were implemented in our system: the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The results of experiments performed on phantom eyes show that, with either algorithm, the developed system tracks a sphere through a 360° rotation with an error lower than 0.5%. Experiments have also been carried out on images of real eyes, showing promising behavior of the system in the presence of blood or surgical instruments during real eye surgery.

  3. High-Definition 3D Stereoscopic Microscope Display System for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Yoo Kwan-Hee

    2010-01-01

    Biomedical research increasingly relies on advanced information techniques, and high-quality micro-stereo images are used by researchers and doctors for various purposes in biomedical research and surgery. Many devices have been developed to visualize such stereo images. However, these devices are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. In this paper, we describe the development of a high-definition (HD) three-dimensional (3D) stereoscopic imaging display system for use with an operating microscope or in animal experiments. The system consists of a stereoscopic camera part, an image processing device for stereoscopic video recording, and a stereoscopic display. In order to reduce eyestrain and viewer fatigue, we use a preexisting stereo microscope structure and a polarized-light stereoscopic display method that does not reduce the quality of the stereo images. The developed system can overcome the discomfort of the eyepiece and the eyestrain caused by use over a long period of time.

  4. Mastcam-Z: Designing a Geologic, Stereoscopic, and Multispectral Pair of Zoom Cameras for the NASA Mars 2020 Rover

    Science.gov (United States)

    Bell, J. F.; Maki, J. N.; Mehall, G. L.; Ravine, M. A.; Caplinger, M. A.; Mastcam-Z Team

    2016-10-01

    Mastcam-Z is a stereoscopic, multispectral imaging investigation selected for flight on the Mars 2020 rover mission. In this presentation we review our science goals and requirements and describe our CDR-level design and operational plans.

  5. Quantitative perceived depth from sequential monocular decamouflage.

    Science.gov (United States)

    Brooks, K R; Gillam, B J

    2006-03-01

    We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay ≥ 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.

  6. Outdoor autonomous navigation using monocular vision

    OpenAIRE

    Royer, Eric; Bom, Jonathan; Dhome, Michel; Thuilot, Benoît; Lhuillier, Maxime; Marmoiton, Francois

    2005-01-01

    In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided along a path by a human. During this learning step, the robot records a video sequence. From this sequence, a three-dimensional map of the trajectory and the environment is built. Once this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results carried out with an urban electric vehicle are shown.

  7. Monocular alignment in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Wade, Nicholas J

    2002-04-01

    We examined (a) whether vertical lines at different physical horizontal positions in the same eye can appear to be aligned, and (b), if so, whether the difference between the horizontal positions of the aligned vertical lines can vary with the perceived depth between them. In two experiments, each of two vertical monocular lines was presented (in its respective rectangular area) in one field of a random-dot stereopair with binocular disparity. In Experiment 1, 15 observers were asked to align a line in an upper area with a line in a lower area. The results indicated that when the lines appeared aligned, their horizontal physical positions could differ and the direction of the difference coincided with the type of disparity of the rectangular areas; this is not consistent with the law of the visual direction of monocular stimuli. In Experiment 2, 11 observers were asked to report relative depth between the two lines and to align them. The results indicated that the difference of the horizontal position did not covary with their perceived relative depth, suggesting that the visual direction and perceived depth of the monocular line are mediated via different mechanisms.

  8. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and feature only the crux, or most important part, of the surgery, leaving out other crucial parts including the opening, approach, and closing of the surgical site. In addition, many other procedures, including complex spine, trauma, and intensive care unit procedures, are rarely recorded at all. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system used to obtain stereoscopic 3D recordings of these seldom-recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded over 50 cranial and spinal surgeries in stereoscopic 3D and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset that supplements 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  9. Taking space literally: reconceptualizing the effects of stereoscopic representation on user experience

    Directory of Open Access Journals (Sweden)

    Benny Liebold

    2013-03-01

    Recently, cinemas, home theater systems and game consoles have undergone a rapid evolution towards stereoscopic representation, with recipients gradually becoming accustomed to these changes. Stereoscopy techniques in most media present two offset images separately to the left and right eye of the viewer (usually with the help of glasses separating the two images), resulting in the perception of three-dimensional depth. In contrast to these mass-market techniques, true 3D volumetric displays or holograms that display an image in three full dimensions are relatively uncommon. The visual quality and visual comfort of stereoscopic representation is constantly being improved by the industry.

  10. Clinical evaluation of stereoscopic DSA for vascular lesions

    OpenAIRE

    大川,元臣; 児島, 完治; 影山,淳一; 日野, 一郎; 高島, 均; 玉井,豊理; 田邉,正忠; 大本, 尭史; 植田, 清隆; 藤原, 敬

    1989-01-01

    Seventy-one series of stereoscopic DSA performed in thirty-nine patients with intracranial vascular lesions were evaluated, either by comparison with subtracted magnified angiograms or independently. All stereoscopic series had good or fairly good stereoscopic quality. Stereoscopic DSA was useful for the preoperative stereoscopic analysis of vascular lesions such as aneurysms, arteriovenous malformations, carotid-cavernous fistulas, obstructive or stenotic vascular lesions and vascular elongation.

  11. The effects of stereoscopic depth on completion.

    Science.gov (United States)

    Takeichi, H

    1999-01-01

    Stereoscopic depth has a critical effect on completion of partially occluded figures. However, it has not strictly been distinguished whether the effect is direct or indirect through alteration of contour segmentation or parsing. Here, I report that stereoscopic depth does not influence completion of partially occluded figures when parsing is unambiguous from motion cues. This is consistent with the present proposal that stereoscopic depth does not have a unique role in completion and that it is one of the cues to contour segmentation or parsing, which in turn influences completion and surface representation, like motion, shape, or transparency.

  12. Depth Perception In Remote Stereoscopic Viewing Systems

    Science.gov (United States)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of the perception of depth by human operators through stereoscopic video systems. The purpose of such studies is to optimize the dual-camera configurations used to view the workspaces of remote manipulators at distances of 1 to 3 m from the cameras. According to the analysis, static stereoscopic depth distortion can be decreased, without decreasing stereoscopic depth resolution, by increasing the camera-to-object and intercamera distances and the camera focal length. The analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating the cameras around the center of a circle passing through the point of convergence of the viewing axes and the first nodal points of the two camera lenses.

  13. Monocular Obstacle Detection for Real-World Environments

    Science.gov (United States)

    Einhorn, Erik; Schroeter, Christof; Gross, Horst-Michael

    In this paper, we present a feature-based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted in front of a mobile robot. Using various techniques, we are able to produce a precise reconstruction that is almost free from outliers and can therefore be used for reliable obstacle detection and avoidance. In real-world field tests we show that the presented approach is able to detect obstacles that cannot be seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can increase the detection rate of obstacles considerably, allowing the autonomous use of mobile robots in complex public and home environments.
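
    The paper's EKF-based reconstruction is not reproduced here, but the predict/update machinery it relies on can be illustrated with a deliberately tiny, hypothetical example: a scalar EKF estimating one static feature depth from noise-free inverse-depth-style measurements (state, measurement model and noise values are all mine):

```python
def ekf_step(x, P, z, Q=1e-6, R=1e-4):
    """One scalar EKF cycle for a static state with measurement h(x) = 1/x."""
    # Predict: static motion model f(x) = x, so Jacobian F = 1
    x_pred, P_pred = x, P + Q
    # Update: measurement Jacobian H = dh/dx = -1/x^2
    H = -1.0 / (x_pred * x_pred)
    S = H * P_pred * H + R            # innovation covariance
    K = P_pred * H / S                # Kalman gain
    x_new = x_pred + K * (z - 1.0 / x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Repeated measurements of a feature at 2.0 m pull a poor initial guess
# (3.0 m) toward the true depth while the variance shrinks:
x, P = 3.0, 0.5
for _ in range(60):
    x, P = ekf_step(x, P, z=1.0 / 2.0)
```

    In the paper's actual system the state holds full feature geometry and camera motion, but the same two-step cycle applies per image.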

  14. Field-Sequential Electronic Stereoscopic Projector

    Science.gov (United States)

    Lipton, Lenny

    1989-07-01

    Culminating a research and development project spanning many years, StereoGraphics Corporation has succeeded in bringing to market the first field-sequential electronic stereoscopic projector. The product is based on a modification of Electrohome and Barco projectors. Our design goal was to produce a projector capable of displaying an image on a six-foot (or larger) diagonal screen for an audience of 50 or 60 people, or for an individual using a simulator. A second goal was to produce an image that required only passive polarizing glasses rather than powered, tethered visors. Two major design challenges posed themselves. First, it was necessary to create an electro-optical modulator that could switch the characteristics of polarized light at field rate, and second, it was necessary to produce a bright green CRT with short persistence to prevent crosstalk between left and right fields. To solve the first problem, development was undertaken to produce the required electro-optical modulator. The second problem was solved with the help of a vendor specializing in high-performance CRTs.

  15. Monocular 3D scene reconstruction at absolute scale

    Science.gov (United States)

    Wöhler, Christian; d'Angelo, Pablo; Krüger, Lars; Kuhl, Annika; Groß, Horst-Michael

    In this article we propose a method for combining geometric and real-aperture methods for monocular three-dimensional (3D) reconstruction of static scenes at absolute scale. Our algorithm relies on a sequence of images of the object acquired by a monocular camera of fixed focal setting from different viewpoints. Object features are tracked over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained based on a Depth-from-Defocus approach. The parameters of the point spread functions estimated by Depth-from-Defocus are used as a regularisation term for Structure-from-Motion. The reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth-from-Defocus are simultaneously minimised for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge about scene structure and camera motion. We describe the implementation of the proposed method both as an offline and as an online algorithm. Evaluating the algorithm on real-world data, we demonstrate that it yields typical relative scale errors of a few per cent. We examine the influence of random effects, i.e. the noise of the pixel grey values, and systematic effects, caused by thermal expansion of the optical system or by inclusion of strongly blurred images, on the accuracy of the 3D reconstruction result. Possible applications of our approach are in the field of industrial quality inspection; in particular, it is preferable to stereo cameras in industrial vision systems with space limitations or where strong vibrations occur.
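
    The joint minimisation described above (reprojection error from bundle adjustment plus absolute depth error from Depth-from-Defocus) can be caricatured as a single cost function. This toy version only shows the shape of the objective; the function name and the weight `lam` are hypothetical, and the real system optimises over camera and structure parameters rather than evaluating fixed residuals:

```python
def joint_cost(reproj_residuals_px, depths_m, dfd_depths_m, lam=1.0):
    """Sum of squared reprojection residuals plus weighted squared errors
    between reconstructed depths and Depth-from-Defocus depth estimates."""
    reproj = sum(r * r for r in reproj_residuals_px)
    depth = sum((z - zd) ** 2 for z, zd in zip(depths_m, dfd_depths_m))
    return reproj + lam * depth

# Two tracked features: pixel residuals and reconstructed vs. DfD depths
cost = joint_cost([0.5, -0.3], [2.0, 2.1], [2.2, 2.0])
```

    The Depth-from-Defocus term is what anchors the otherwise scale-free Structure-from-Motion solution to absolute scale.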

  16. Remote non-invasive stereoscopic imaging of blood vessels: first in-vivo results of a new multispectral contrast enhancement technology

    NARCIS (Netherlands)

    Wieringa, F.P.; Mastik, F.; Cate, F.J. ten; Neumann, H.A.M.; Steen, A.F.W. van der

    2006-01-01

    We describe a contactless optical technique selectively enhancing superficial blood vessels below variously pigmented intact human skin by combining images in different spectral bands. Two CMOS-cameras, with apochromatic lenses and dual-band LED-arrays, simultaneously streamed Left (L) and Right (R)

  18. Multimodal Stereoscopic Movie Summarization Conforming to Narrative Characteristics.

    Science.gov (United States)

    Mademlis, Ioannis; Tefas, Anastasios; Nikolaidis, Nikos; Pitas, Ioannis

    2016-10-05

    Video summarization is a timely and rapidly developing research field with broad commercial interest, due to the increasing availability of massive video data. Relevant algorithms face the challenge of needing to achieve a careful balance between summary compactness, enjoyability and content coverage. The specific case of stereoscopic 3D theatrical films has become more important over the past years, but has not received corresponding research attention. In the present work, a multi-stage, multimodal summarization process for such stereoscopic movies is proposed, which is able to extract a short, representative video skim conforming to narrative characteristics from a 3D film. At the initial stage, a novel, low-level video frame description method is introduced (Frame Moments Descriptor, or FMoD), which compactly captures informative image statistics from luminance, color, optical flow and stereoscopic disparity video data, both at a global and at a local scale. Thus, scene texture, illumination, motion and geometry properties may succinctly be contained within a single frame feature descriptor, which can subsequently be employed as a building block in any key-frame extraction scheme, e.g., for intra-shot frame clustering. The computed key-frames are then used to construct a movie summary in the form of a video skim, which is post-processed in a manner that also takes into account the audio modality. The next stage of the proposed summarization pipeline essentially performs shot pruning, controlled by a user-provided shot retention parameter, that removes segments from the skim based on the narrative prominence of movie characters in both the visual and the audio modalities. This novel process (Multimodal Shot Pruning, or MSP) is algebraically modelled as a multimodal matrix Column Subset Selection Problem, which is solved using an evolutionary computing approach.
Subsequently, disorienting editing effects induced by summarization are dealt with, through manipulation of

  19. Depth measurement using monocular stereo vision system: aspect of spatial discretization

    Science.gov (United States)

    Xu, Zheng; Li, Chengjin; Zhao, Xunjie; Chen, Jiabo

    2010-11-01

    The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used for 3D reconstruction. When applying the system to 3D reconstruction, one must consider the effects introduced by the digital camera. There are two possible configurations of the monocular stereo vision system. In the first, the distance between the target object and the camera image plane is constant and the lens moves. In the second, the lens position is constant and the image plane moves with respect to the target. In this paper, mathematical models of the two approaches are presented. We focus on iso-disparity surfaces to characterize the discretization effect on the reconstructed space. The models are implemented and simulated in Matlab. The analysis is used to define application constraints and limitations of these methods. The results can also be used to enhance the accuracy of depth measurement.
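
    The discretization the authors analyse is easiest to see in the classical two-view relation Z = f·b/d (their monocular system replaces the fixed baseline with lens or image-plane displacement, so the numbers below are purely hypothetical): integer pixel disparities d admit only a discrete set of depths, the iso-disparity surfaces, and the spacing between adjacent surfaces grows roughly as Z².

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of an iso-disparity surface: Z = f * b / d."""
    return f_px * baseline_m / disparity_px

f, b = 800.0, 0.1                      # hypothetical focal length and baseline
z10 = depth_from_disparity(f, b, 10)   # depth at disparity 10 px -> 8.0 m
z11 = depth_from_disparity(f, b, 11)   # next reconstructable depth level
# Gap between adjacent depth levels; it shrinks for nearer (larger-d) levels:
gap = z10 - z11
```

    This is why depth resolution degrades quadratically with distance in any disparity-based scheme, which is exactly the kind of constraint the paper's analysis quantifies.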

  20. Full optical characterization of autostereoscopic 3D displays using local viewing angle and imaging measurements

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2012-03-01

    Two commercial auto-stereoscopic 3D displays are characterized using a Fourier-optics viewing-angle system and an imaging video-luminance-meter. One display has a fixed emissive configuration and the other adapts its emission to the observer position using head tracking. For a fixed emissive condition, viewing-angle measurements are performed at three positions (center, right and left). Qualified monocular and binocular viewing spaces in front of the display are deduced, as well as the best working distance. The imaging system is then positioned at this working distance and the crosstalk homogeneity over the entire surface of the display is measured. We show that the crosstalk is generally not optimized over the whole surface of the display. Display-aspect simulation using the viewing-angle measurements allows a better understanding of the origin of these crosstalk variations. Local imperfections like scratches and marks generally increase the crosstalk drastically, demonstrating that cleanliness requirements for this type of display are quite critical.
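
    The crosstalk being mapped here is commonly quantified (one of several definitions in use; the abstract does not state which the authors adopted) as the black-corrected luminance leaking into the unintended eye relative to the black-corrected intended signal. A sketch with made-up luminances:

```python
def crosstalk_percent(l_leakage, l_signal, l_black=0.0):
    """Crosstalk (%) = 100 * (L_leakage - L_black) / (L_signal - L_black),
    with luminances in cd/m^2 measured at one eye position."""
    return 100.0 * (l_leakage - l_black) / (l_signal - l_black)

# e.g. 6 cd/m^2 leaking through against a 200 cd/m^2 white, 1 cd/m^2 black:
print(crosstalk_percent(6.0, 200.0, 1.0))  # ~2.5 %
```

    Repeating this per measurement spot across the panel gives exactly the kind of homogeneity map the abstract describes.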

  1. Human skeleton proportions from monocular data

    Institute of Scientific and Technical Information of China (English)

    PENG En; LI Ling

    2006-01-01

    This paper introduces a novel method for estimating the skeleton proportions of a human figure from monocular data. The proposed system first automatically extracts the key frames and recovers the perspective camera model from the 2D data. The human skeleton proportions are then estimated from the key frames using the recovered camera model, without posture reconstruction. The proposed method is shown to be simple and fast, and it produces satisfactory results for the input data. The human model with estimated proportions can be used in future research involving human body modeling or human motion reconstruction.

  2. Toward Simultaneous Visual Comfort and Depth Sensation Optimization for Stereoscopic 3-D Experience.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Li, Zhutuan; Jiang, Gangyi; Dai, Qionghai

    2016-10-20

    Visual comfort and depth sensation are two important but often conflicting aspects of the overall stereoscopic 3-D experience. In this paper, we propose a novel approach that optimizes visual comfort and depth sensation simultaneously for stereoscopic images, with the aim of enhancing the overall stereoscopic 3-D experience. Toward this end, we propose a two-stage solution. In the first, layer-independent disparity adjustment stage, we iteratively adjust the disparity range of each depth layer to satisfy visual comfort and depth sensation constraints simultaneously. In the following layer-dependent disparity adjustment stage, disparity adjustment is implemented by minimizing a total energy function built from intra-layer data, inter-layer data, and just-noticeable depth difference terms. Experimental results on perceptually uncomfortable and comfortable stereoscopic images demonstrate that, in comparison with existing methods, the proposed method achieves a reasonable balance between visual comfort and depth sensation, leading to a promising overall stereoscopic 3-D experience.
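    The first-stage idea (fitting the scene's disparity budget into a comfort zone while preserving depth ordering) can be sketched as a simple linear remapping. The paper's actual method adjusts each layer iteratively under an energy function, so this is only a toy illustration with hypothetical comfort limits:

    ```python
    import numpy as np

    def fit_to_comfort_zone(layer_disparities, d_min=-30.0, d_max=30.0):
        """Linearly remap per-layer disparities (pixels) so the scene's
        disparity range fits inside [d_min, d_max], preserving depth order.
        Illustrative only; the paper optimizes a per-layer energy instead."""
        d = np.asarray(layer_disparities, dtype=float)
        lo, hi = d.min(), d.max()
        if hi == lo:
            return np.clip(d, d_min, d_max)
        scale = min(1.0, (d_max - d_min) / (hi - lo))   # never stretch depth
        center = (lo + hi) / 2.0
        return (d - center) * scale + (d_min + d_max) / 2.0

    layers = [-80, -20, 0, 45, 90]        # hypothetical per-layer disparities
    adjusted = fit_to_comfort_zone(layers)
    print(adjusted)  # range compressed to 60 px, ordering preserved
    ```

    A uniform remap like this sacrifices depth sensation evenly across the scene; the point of the paper's layer-dependent second stage is to distribute that sacrifice where it is least noticeable.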

  3. Traveling via Rome through the Stereoscope: Reality, Memory, and Virtual Travel

    Directory of Open Access Journals (Sweden)

    Douglas M. Klahr

    2016-06-01

    Full Text Available Underwood and Underwood’s 'Rome through the Stereoscope' of 1902 was a landmark in stereoscopic photography publishing, both as an intense, visually immersive experience and as a cognitively demanding exercise. The set consisted of a guidebook, forty-six stereographs, and five maps whose notations enabled the reader/viewer to precisely replicate the location and orientation of the photographer at each site. Combined with the extensive narrative within the guidebook, the maps and images guided its users through the city via forty-six sites, whether as an example of armchair travel or an actual travel companion. The user’s experience is examined and analyzed within the following parameters: the medium of stereoscopic photography, narrative, geographical imagination, and memory, bringing forth issues of movement, survey and route frames of reference, orientation, visualization, immersion, and primary versus secondary memories. 'Rome through the Stereoscope' was an example of virtual travel, and the process of fusing dual images into one — stereoscopic synthesis — further demarcated the experience as a virtual environment.

  4. Evaluation of multidimensional stereoscopic vision after intraocular lens implantation in patients with cataract

    Institute of Scientific and Technical Information of China (English)

    龙潭; 马挺; 梁厚成

    2014-01-01

    Objective: To compare multidimensional stereoscopic vision in cataract patients after monocular and binocular implantation of intraocular lenses (IOLs) of different types. Methods: Data from binocular cataract patients who underwent cataract extraction with IOL implantation were retrospectively analyzed. The effects of different IOL types on multidimensional stereoscopic vision were compared postoperatively, and the changes in multidimensional stereoscopic vision after monocular versus binocular surgery were compared. Factors influencing multidimensional stereoscopic vision were analyzed with logistic regression. Results: Patients with binocular IOL implantation achieved better multidimensional stereoscopic vision at all levels than patients with monocular implantation, and the differences were statistically significant. After monocular IOL implantation, stereoscopic vision correlated with age and with the difference in best corrected visual acuity (BCVA) between the two eyes: younger patients and those with a smaller inter-ocular BCVA difference had better stereoscopic vision. IOL type had no significant effect on stereoscopic vision. Conclusion: Good stereoscopic vision can be obtained after implantation of any of the IOL types studied, with no statistically significant differences among them. Since patients' needs for stereoscopic vision differ, the timing of surgery on the fellow eye after monocular surgery can be chosen according to the inter-ocular BCVA difference.

  5. Stereoscopic Retrieval of Smoke Plume Heights and Motion from Space-Based Multi-Angle Imaging, Using the MISR INteractive eXplorer(MINX)

    Science.gov (United States)

    Nelson, David L.; Kahn, Ralph A.

    2014-01-01

    Airborne particles (desert dust, wildfire smoke, volcanic effluent, urban pollution) affect Earth's climate as well as air quality and health. They are found in the atmosphere all over the planet, but vary immensely in amount and properties with season and location. Most aerosol particles are injected into the near-surface boundary layer, but some, especially wildfire smoke, desert dust and volcanic ash, can be injected higher into the atmosphere, where they can stay aloft longer, travel farther, produce larger climate effects, and possibly affect human and ecosystem health far downwind. Monitoring aerosol injection height globally can therefore make important contributions to climate science and air quality studies. The Multi-angle Imaging SpectroRadiometer (MISR) is a space-borne instrument designed to study Earth's clouds, aerosols, and surface. Since late February 2000 it has been retrieving aerosol particle amount and properties, as well as cloud height and wind data, globally, about once per week. The MINX visualization and analysis tool complements the operational MISR data products, enabling users to retrieve heights and winds locally for detailed studies of smoke plumes, at higher spatial resolution and with greater precision than the operational product and other space-based, passive remote-sensing techniques. MINX software is being used to provide plume-height statistics for climatological studies, to investigate the dynamics of individual plumes, and to provide parameterizations for climate modeling.

  6. Determination of cloud-top height from stereoscopic observation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new and accurate method is presented for determining cloud-top height and position, based on cloud movement (height and position) and the spherical and plane triangular relationships among the spacecraft, the center of the earth, the projected cloud and the true cloud. Synthetic stereo images with a spatial resolution of 1.25 km from a single satellite are used to test this method. It is demonstrated that cloud-top structure can be determined from stereoscopic measurements of a geosynchronous satellite with a vertical accuracy of approximately 500 m. The vertical accuracy can be improved with lower-orbit satellites.
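    The core geometry can be illustrated with a flat-Earth simplification: the cloud-top height follows from the ground-projected parallax between two viewing directions. The angles and parallax below are hypothetical, and the paper's method uses the full spherical and plane-triangle relations rather than this toy formula:

    ```python
    import math

    def cloud_top_height(parallax_km, theta1_deg, theta2_deg):
        """Cloud-top height from the ground-projected parallax between two
        views at zenith angles theta1 and theta2 on opposite sides of the
        cloud. Flat-Earth approximation for illustration only."""
        t1 = math.tan(math.radians(theta1_deg))
        t2 = math.tan(math.radians(theta2_deg))
        return parallax_km / (t1 + t2)

    # e.g. 12.5 km of apparent displacement between 40-degree and 30-degree views
    h = cloud_top_height(12.5, 40.0, 30.0)
    print(round(h, 2))
    ```

    The formula also shows why cloud motion between the two acquisitions must be corrected first: any along-track displacement adds directly to the measured parallax and would bias the retrieved height.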

  7. Optimal Backlight Modulation With Crosstalk Control in Stereoscopic Display

    DEFF Research Database (Denmark)

    Jiao, Liangbao; Shu, Xiao; Cheng, Yong

    2014-01-01

    Crosstalk between the left-eye and right-eye images is one of the main artifacts affecting the visual quality of stereoscopic liquid crystal display (LCD) systems. In this paper, a novel technique, called Optimal Backlight Modulation (OBM), is proposed to reduce crosstalk by taking the advantage....... A simple closed-form approximation of the optimization problem can be easily employed and solved in real time on LCD control hardware. Simulation results show that the proposed OBM algorithm provides the same or higher luminance while reducing the crosstalk by 60% compared with the other tested methods....

  8. Flow Mapping of a Jet in Crossflow with Stereoscopic PIV

    DEFF Research Database (Denmark)

    Meyer, Knud Erik; Özcan, Oktay; Westergaard, C. H.

    2002-01-01

    Stereoscopic Particle Image Velocimetry (PIV) has been used to make a three-dimensional flow mapping of a jet in crossflow. The Reynolds number based on the free stream velocity and the jet diameter was nominally 2400. A jet-to-crossflow velocity ratio of 3.3 was used. Details of the formation...... of the counter rotating vortex pair found behind the jet are shown. The vortex pair results in two regions with strong reversed velocities behind the jet trajectory. Regions of high turbulent kinetic energy are identified. The signature of the unsteady shear layer vortices is found in the mean vorticity field....

  9. [Electronic eikonometer: Measurement tests displayed on stereoscopic screen].

    Science.gov (United States)

    Bourdy, C; James, Y

    2016-05-01

    We propose the presentation on a stereoscopic screen of the electronic eikonometer tests intended for analysis and measurement of perceptual effects of binocular disparity. These tests, so-called "built-in magnification tests" are constructed according to the same principle as those of preceding eikonometers (disparity variation parameters being included in each test presentation, which allows, for test observation and measurements during the examination, the removing of any intermediate optical system). The images of these tests are presented separately to each eye, according to active or passive stereoscopic screen technology: (1) Ogle Spatial Test to measure aniseikonia; (2) Fixation Disparity test: binocular nonius; (3) retinal correspondence test evaluated by nonius horopter; (4) stereoscopic test using Julesz' random-dot stereograms (RDS). All of these tests, with their variable parameters included, are preprogrammed by means of an associated mini-computer. This new system (a single screen for the presentation of tests for the right eye and left eye) will be much simpler to reproduce and install for all practitioners interested in the functional exploration of binocular vision. We develop the suitable methodology adapted to each type of examination, as well as manipulations to be performed by the operator. We then recall the possibilities for reducing aniseikonia thanks to some theoretical studies previously performed by matrix calculation of the size of the retinal images for different types of eye (emmetropia, axial or conformation anisometropia, aphakia) and for different means of correction (glasses, contact lenses, implants). Software for achieving these different tests is available, on request, at this address: eiconometre.electronique@gmail.com. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Stereoscopic HDTV Research at NHK Science and Technology Research Laboratories

    CERN Document Server

    Yamanoue, Hirokazu; Nojiri, Yuji

    2012-01-01

    This book focuses on the two psychological factors of naturalness and ease of viewing of three-dimensional high-definition television (3D HDTV) images. It has been said that distortions peculiar to stereoscopic images, such as the “puppet theater” effect or the “cardboard” effect, spoil the sense of presence. Whereas many earlier studies have focused on geometrical calculations about these distortions, this book instead describes the relationship between the naturalness of reproduced 3D HDTV images and the nonlinearity of depthwise reproduction. The ease of viewing of each scene is regarded as one of the causal factors of visual fatigue. Many of the earlier studies have been concerned with the accurate extraction of local parallax; however, this book describes the typical spatiotemporal distribution of parallax in 3D images. The purpose of the book is to examine the correlations between the psychological factors and amount of characteristics of parallax distribution in order to understand the characte...

  11. Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization

    Science.gov (United States)

    Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…

  12. Measurement of mean rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, Oktay; Meyer, Knud Erik; Larsen, Poul Scheel

    2005-01-01

    A technique is described for measuring the mean velocity gradient (rate-of-displacement) tensor by using a conventional stereoscopic particle image velocimetry (SPIV) system. Planar measurement of the mean vorticity vector, rate-of-rotation and rate-of-strain tensors and the production of turbulent...
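    The planar part of such a decomposition can be sketched from a gridded velocity field: the symmetric part of the velocity-gradient tensor is the rate of strain, and the antisymmetric part is the rate of rotation. A minimal numpy illustration (2D in-plane terms only; SPIV additionally supplies the out-of-plane velocity component), checked against solid-body rotation:

    ```python
    import numpy as np

    def rotation_and_strain(u, v, dx=1.0, dy=1.0):
        """In-plane rate-of-strain and rate-of-rotation components from a 2D
        velocity field (u, v) on a regular grid via central differences."""
        du_dy, du_dx = np.gradient(u, dy, dx)
        dv_dy, dv_dx = np.gradient(v, dy, dx)
        strain_xy = 0.5 * (du_dy + dv_dx)     # S_xy = S_yx
        rotation_xy = 0.5 * (du_dy - dv_dx)   # Omega_xy = -Omega_yx
        vorticity_z = dv_dx - du_dy           # omega_z = -2 * Omega_xy
        return strain_xy, rotation_xy, vorticity_z

    # sanity check: solid-body rotation has zero strain, uniform vorticity 2*omega
    y, x = np.mgrid[0:5, 0:5].astype(float)
    omega = 2.0
    u, v = -omega * y, omega * x
    Sxy, Rxy, wz = rotation_and_strain(u, v)
    print(np.allclose(Sxy, 0.0), np.allclose(wz, 2 * omega))
    ```

    With real SPIV data the grid spacings come from the vector-map calibration, and the out-of-plane gradients require additional measurement planes or assumptions.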

  13. Crosstalk reduction in stereoscopic 3D displays: disparity adjustment using crosstalk visibility index for crosstalk cancellation.

    Science.gov (United States)

    Sohn, Hosik; Jung, Yong Ju; Man Ro, Yong

    2014-02-10

    Stereoscopic displays provide viewers with a truly fascinating viewing experience. However, current stereoscopic displays suffer from crosstalk that is detrimental to image quality, depth quality, and visual comfort. In order to reduce the perceived crosstalk in stereoscopic displays, this paper proposes a crosstalk reduction method that combines disparity adjustment and crosstalk cancellation. The main idea of the proposed method is to displace the visible crosstalk using the disparity adjustment in a way that less amounts of intensity leakage occur on perceptually important regions in a scene. To this purpose, we estimate a crosstalk visibility index map for the scene that represents pixel-by-pixel importance values associated with the amount of perceived crosstalk and negative-after-effects of the crosstalk cancellation. Based on the crosstalk visibility index, we introduce a new disparity adjustment method that reduces the annoying crosstalk in processed images, which is followed by the crosstalk cancellation. The effectiveness of the proposed method has been successfully evaluated by subjective assessments of image quality and viewing preference. Experimental results demonstrate that the proposed method effectively improves the image quality and overall viewing quality of stereoscopic videos.
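    Crosstalk cancellation itself is commonly modeled as inverting a linear leakage matrix between the two views; the clipping of negative pre-distorted values is exactly the residual that the paper's disparity adjustment tries to move away from perceptually important regions. A toy sketch with an assumed uniform leakage ratio:

    ```python
    import numpy as np

    def cancel_crosstalk(L, R, c=0.05):
        """Pre-distort left/right images so that a display mixing them as
        L_seen = L + c*R, R_seen = R + c*L shows the intended images.
        c is a hypothetical uniform leakage ratio; real panels need
        measured, possibly spatially varying, leakage."""
        M = np.array([[1.0, c], [c, 1.0]])
        stacked = np.stack([L, R]).reshape(2, -1)
        out = np.linalg.inv(M) @ stacked
        # negative values cannot be displayed -> clip; this clipping is the
        # "negative after-effect" the cancellation cannot remove
        return np.clip(out, 0.0, 1.0).reshape(2, *L.shape)

    L = np.full((2, 2), 0.8)
    R = np.full((2, 2), 0.4)
    Lp, Rp = cancel_crosstalk(L, R)
    seen_L = Lp + 0.05 * Rp          # what the display mixing produces
    print(np.allclose(seen_L, L))    # leakage cancelled where no clipping occurred
    ```

    Cancellation fails precisely where a dark pixel sits opposite a bright one in the other view, which is why combining it with disparity adjustment, as proposed here, is attractive.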

  14. Stereoscopic video analysis of Anopheles gambiae behavior in the field: challenges and opportunities

    Science.gov (United States)

    Advances in our ability to localize and track individual swarming mosquitoes in the field via stereoscopic image analysis have enabled us to test long-standing ideas about individual male behavior and to directly observe coupling. These studies further our fundamental understanding of the reproductive ...

  15. Quality Assessment of Perceptual Crosstalk on Two-View Auto-Stereoscopic Displays.

    Science.gov (United States)

    Kim, Jongyoo; Kim, Taewan; Lee, Sanghoon; Bovik, Alan Conrad

    2017-10-01

    Crosstalk is one of the most severe factors affecting the perceived quality of stereoscopic 3D images. It arises from a leakage of light intensity between multiple views, as in auto-stereoscopic displays. Well-known determinants of crosstalk include the co-location contrast and disparity of the left and right images, which have been dealt with in prior studies. However, when a natural stereo image that contains complex naturalistic spatial characteristics is viewed on an auto-stereoscopic display, other factors may also play an important role in the perception of crosstalk. Here, we describe a new way of predicting the perceived severity of crosstalk, which we call the Binocular Perceptual Crosstalk Predictor (BPCP). BPCP uses measurements of three complementary 3D image properties (texture, structural duplication, and binocular summation) in combination with two well-known factors (co-location contrast and disparity) to make predictions of crosstalk on two-view auto-stereoscopic displays. The new BPCP model includes two masking algorithms and a binocular pooling method. We explore a new masking phenomenon that we call duplicated structure masking, which arises from structural correlations between the original and distorted objects. We also utilize an advanced binocular summation model to develop a binocular pooling algorithm. Our experimental results indicate that BPCP achieves high correlations against subjective test results, improving upon those delivered by previous crosstalk prediction models.

  16. Temporal aspects of stereoscopic slant estimation: An evaluation and extension of Howard and Kaneko's theory

    NARCIS (Netherlands)

    Ee, R. van; Erkelens, Casper J.

    2001-01-01

    We investigated temporal aspects of stereoscopically perceived slant produced by the following transformations: horizontal scale, horizontal shear, vertical scale, vertical shear, divergence and rotation, between the half-images of a stereogram. Six subjects viewed large field stimuli (70 deg diamet

  17. Stereoscopic visualization and haptic technology used to create a virtual environment for remote surgery - biomed 2011.

    Science.gov (United States)

    Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M

    2011-01-01

    The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study uses simple in vivo robotic surgical devices and compares the performance of surgeons using the stereoscopic interfacing system to that of surgeons using two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As surgeons use this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately provides force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Benchtop experiments using the interfacing system have also been conducted. A group of participants completed

  18. Reversible monocular cataract simulating amaurosis fugax.

    Science.gov (United States)

    Paylor, R R; Selhorst, J B; Weinberg, R S

    1985-07-01

    In a patient having brittle, juvenile-onset diabetes, transient monocular visual loss occurred repeatedly whenever there were wide fluctuations in serum glucose. Amaurosis fugax was suspected. The visual loss differed, however, in that it persisted over a period of hours to several days. Direct observation eventually revealed that the relatively sudden change in vision of one eye was associated with opacification of the lens and was not accompanied by an afferent pupillary defect. Presumably, a hyperosmotic gradient had developed with the accumulation of glucose and sorbitol within the lens. Water was drawn inward, altering the composition of the lens fibers and thereby lowering the refractive index, forming a reversible cataract. Hypoglycemia is also hypothesized to have played a role in the formation of a higher osmotic gradient. The unilaterality of the cataract is attributed to variation in the permeability of asymmetric posterior subcapsular cataracts.

  19. 21 CFR 886.1880 - Fusion and stereoscopic target.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Fusion and stereoscopic target. 886.1880 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1880 Fusion and stereoscopic target. (a) Identification. A fusion and stereoscopic target is a device intended for use as a viewing...

  20. Real-time arbitrary view synthesis method for ultra-HD auto-stereoscopic display

    Science.gov (United States)

    Cai, Yuanfa; Sang, Xinzhu; Duo, Chen; Zhao, Tianqi; Fan, Xin; Guo, Nan; Yu, Xunbo; Yan, Binbin

    2013-08-01

    An arbitrary view synthesis method from 2D-plus-depth images for real-time auto-stereoscopic display is presented. Traditional methods use depth-image-based rendering (DIBR), a process of synthesizing "virtual" views of a scene from still or moving images and associated per-pixel depth information. All the virtual view images are generated and then the final stereo image is synthesized. DIBR greatly decreases the number of reference images required and is flexible and efficient because depth images are used. However, it causes problems such as holes in the rendered image and depth discontinuities on object surfaces in the virtual image plane. Here, reversed disparity shift pixel rendering is used to generate the stereo image directly, so the target image contains no holes. To avoid duplicated calculation, and to allow matching with any specific three-dimensional display, a selection table is designed to pick the appropriate virtual viewpoints for auto-stereoscopic display. According to the selection table, only the sub-pixels of the appropriate virtual viewpoints are calculated, so the amount of calculation is independent of the number of virtual viewpoints. In addition, 3D image warping is used to translate depth information into parallax between virtual viewpoints, and the viewer can adjust the zero-parallax-setting (ZPS) plane and change the parallax conveniently to suit personal preferences. The proposed method is implemented with OpenGL and demonstrated on a laptop computer with a 2.3 GHz Intel Core i5 CPU and an NVIDIA GeForce GT540M GPU. A frame rate of 30 frames per second is achieved with 4096×2340 video. High synthesis efficiency and a good stereoscopic sense are obtained. The presented method can meet the requirements of real-time ultra-HD super multi-view auto-stereoscopic display.
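    The hole problem of conventional forward-warped DIBR that motivates the authors' reversed disparity shift can be shown in a few lines: shifting source pixels by a depth-dependent disparity leaves unfilled columns at depth discontinuities. A deliberately naive sketch (hypothetical toy image and depth map, not the paper's renderer):

    ```python
    import numpy as np

    def synthesize_view(image, depth, baseline_px=8.0):
        """Naive forward-warping DIBR: shift each pixel horizontally by a
        disparity proportional to its depth value (0..1, nearer = larger
        shift). Forward warping leaves holes; reversed disparity shift
        instead computes each *target* pixel directly, avoiding them."""
        h, w = image.shape
        out = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                nx = x + int(round(baseline_px * depth[y, x]))
                if 0 <= nx < w:
                    out[y, nx] = image[y, x]
                    filled[y, nx] = True
        return out, filled

    img = np.arange(16.0).reshape(4, 4)
    dep = np.zeros((4, 4))
    dep[:, 2:] = 0.25                 # right half of the scene is nearer
    view, filled = synthesize_view(img, dep, baseline_px=4.0)
    print(filled.all())               # False: holes at the depth discontinuity
    ```

    The unfilled column sits exactly at the depth step, which is the artifact the proposed target-driven rendering is designed to eliminate.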

  1. Measuring method for the object pose based on monocular vision technology

    Science.gov (United States)

    Sun, Changku; Zhang, Zimiao; Wang, Peng

    2010-11-01

    Position and orientation estimation of an object has important value and can be widely applied in fields such as robot navigation, surgery, and electro-optic aiming systems. A monocular vision positioning algorithm based on point features is studied and a new measurement method is proposed in this paper. First, the approximate coordinates of the five reference points in the camera coordinate system, used as the initial values for iteration, are calculated according to the weak-P3P method. Second, the exact coordinates of the reference points in the camera coordinate system are obtained through iterative calculation using the constraint relationships among the reference points. Finally, the position and orientation of the object are obtained. Thus the measurement model of monocular vision is constructed. To verify the accuracy of the measurement model, a planar target using infrared LEDs as reference points is designed, and the corresponding image-processing algorithm is studied. A monocular vision experimental system is then established. Experimental results show that the translational positioning accuracy reaches ±0.05 mm and the rotational positioning accuracy reaches ±0.2°.
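    As a much-simplified stand-in for the paper's weak-P3P initialization and iterative refinement, the sketch below recovers only the translation of a planar LED target under a known (identity) orientation, which makes the pinhole equations linear in the unknowns. All point coordinates and the focal length are invented for illustration:

    ```python
    import numpy as np

    def project(P, t, f):
        """Pinhole projection of object points P (Nx3, metres) translated by t."""
        Q = P + t
        return np.stack([f * Q[:, 0] / Q[:, 2], f * Q[:, 1] / Q[:, 2]], axis=1)

    def recover_translation(P, uv, f):
        """Linear least-squares recovery of the object translation, assuming
        identity orientation: u(Z+tz) = f(X+tx) gives f*tx - u*tz = u*Z - f*X,
        and likewise for v. A toy stand-in for full pose estimation."""
        rows, rhs = [], []
        for (X, Y, Z), (u, v) in zip(P, uv):
            rows.append([f, 0.0, -u]); rhs.append(u * Z - f * X)
            rows.append([0.0, f, -v]); rhs.append(v * Z - f * Y)
        t, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return t

    # five hypothetical infrared-LED reference points (object frame, metres)
    P = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                  [0.1, 0.1, 0], [0.05, 0.05, 0.02]], dtype=float)
    t_true = np.array([0.03, -0.02, 1.2])
    uv = project(P, t_true, f=0.016)
    t_hat = recover_translation(P, uv, f=0.016)
    print(np.round(t_hat, 6))
    ```

    Recovering rotation as well makes the system nonlinear, which is where the paper's iterative step with inter-point distance constraints comes in.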

  2. Analysis of Performance of Stereoscopic-Vision Software

    Science.gov (United States)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation-window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.
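    The reported 0.32-pixel disparity error translates into down-range error through the usual first-order propagation of z = f·b/d, which grows quadratically with range. A sketch with a hypothetical focal length and baseline (not the analyzed cameras' calibration):

    ```python
    def range_error(z, focal_px, baseline_m, disparity_err_px):
        """First-order propagation of disparity error into down-range error
        for a standard stereo rig: z = f*b/d implies dz = z^2/(f*b) * dd."""
        return z ** 2 / (focal_px * baseline_m) * disparity_err_px

    # 0.32 px disparity noise; hypothetical 600 px focal length, 10 cm baseline
    for z in (1.0, 5.0, 10.0):
        print(z, round(range_error(z, 600.0, 0.10, 0.32), 4))
    ```

    The quadratic growth with range is why the document's distinction matters: calibration error spreads into cross-range as well, but correlation noise degrades only the down-range estimate.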

  3. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  4. Binocular function during unequal monocular input.

    Science.gov (United States)

    Kim, Taekjun; Freeman, Ralph D

    2017-02-01

    The fine task of stereoscopic depth discrimination in human subjects requires a functional binocular system. Behavioral investigations show that relatively small binocular abnormalities can diminish stereoscopic acuity. Clinical evaluations are consistent with this observation. Neurons in visual cortex represent the first stage of processing of the binocular system. Cells at this level are generally acutely sensitive to differences in relative depth. However, an apparent paradox in previous work demonstrates that tuning for binocular disparities remains relatively constant even when large contrast differences are imposed between left and right eye stimuli. This implies a range of neural binocular function that is at odds with behavioral findings. To explore this inconsistency, we have conducted psychophysical tests by which human subjects view vertical sinusoidal gratings drifting in opposite directions to left and right eyes. If the opposite drifting gratings are integrated in visual cortex, as wave theory and neurophysiological data predict, the subjects should perceive a fused stationary grating that is counter-phasing in place. However, this behavioral combination may not occur if there are differences in contrast and therefore signal strength between left and right eye stimuli. As expected for the control condition, our results show fused counter-phase perception for equal inter-ocular grating contrasts. Our experimental tests show a striking retention of counter-phase perception even for relatively large differences in inter-ocular contrast. This finding demonstrates that binocular integration, although relatively coarse, can occur during substantial differences in left and right eye signal strength. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Localization of monocular stimuli in different depth planes.

    Science.gov (United States)

    Shimono, Koichi; Tam, Wa James; Asakura, Nobuhiko; Ohmi, Masao

    2005-09-01

    We examined the phenomenon in which two physically aligned monocular stimuli appear to be non-collinear when each of them is located in binocular regions that are at different depth planes. Using monocular bars embedded in binocular random-dot areas that are at different depths, we manipulated properties of the binocular areas and examined their effect on the perceived direction and depth of the monocular stimuli. Results showed that (1) the relative visual direction and perceived depth of the monocular bars depended on the binocular disparity and the dot density of the binocular areas, and (2) the visual direction, but not the depth, depended on the width of the binocular regions. These results are consistent with the hypothesis that monocular stimuli are treated by the visual system as binocular stimuli that have acquired the properties of their binocular surrounds. Moreover, partial correlation analysis suggests that the visual system utilizes both the disparity information of the binocular areas and the perceived depth of the monocular bars in determining the relative visual direction of the bars.

  6. Visual discomfort in stereoscopic displays: a review

    NARCIS (Netherlands)

    Lambooij, M.T.M.; IJsselsteijn, W.; Heynderickx, I.

    2007-01-01

    Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays, but remains an ambiguous concept used to denote a variety of subjective symptoms potentially related to different underlying processes. In this paper we clarify the importance o

  7. Matte painting in stereoscopic synthetic imagery

    Science.gov (United States)

    Eisenmann, Jonathan; Parent, Rick

    2010-02-01

    While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy have not yet been well-established. To that aim, we present experiments and results of subject testing in a stereoscopic environment, similar to that of a theater (i.e. large flat screen without head-tracking). In particular we wish to empirically identify thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still maintaining the illusion of seamless perspective and depth for a particular scene and camera shot. In monoscopic synthetic imagery, any type of matte painting that maintains proper perspective lines, depth cues, and coherent lighting and textures saves in production costs while still maintaining the illusion of an alternate cinematic reality. However, in stereoscopic synthetic imagery, a 2D matte painting that worked in monoscopy may fail to provide the intended illusion of depth because the viewer has added depth information provided by stereopsis. We intend to observe two stereoscopic perceptual thresholds in this study which will provide practical guidelines indicating when to use each of three types of matte paintings. We ran subject tests in two virtual testing environments, each with varying conditions. Data were collected showing how the choices of the users matched the correct response, and the resulting perceptual threshold patterns are discussed below.

  8. Stereoscopic display in a slot machine

    Science.gov (United States)

    Laakso, M.

    2012-03-01

    This paper reports the results of a user trial with a slot machine equipped with a stereoscopic display. The main research question was what kind of added value stereoscopic 3D (S-3D) brings to slot games. After a thorough literature survey, a novel gaming platform was designed and implemented: the existing multi-game slot machine "Nova" was converted to "3DNova" by replacing the monitor with an S-3D display and converting six original games to S-3D format. To evaluate the system, several 3DNova machines were made available to players for four months. Both qualitative and quantitative analyses were carried out, based on statistical values, questionnaires and observations. According to the results, people find the S-3D concept interesting, but the technology is not yet optimal. Young adults and adults were fascinated by the system; older people were more cautious. In particular, the need to wear stereoscopic glasses poses a challenge; the ultimate system would probably use autostereoscopic technology. The games should also be designed to exploit the full power of S-3D. The main contributions of this paper are lessons learned from creating an S-3D slot machine platform and novel information about human factors related to stereoscopic slot machine gaming.

  9. Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

    Science.gov (United States)

    Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent

    2013-03-01

    Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis of those areas. In the context of media entertainment, viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow the practices used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results bring more insight into the deployment of visual attention in stereoscopic 3D content viewing, and provide further understanding of the differences in visual attention behavior between 2D and 3D. They show that a traditional 2D scene compositing approach such as the use of foreground blur does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.

  10. Global localization from monocular SLAM on a mobile phone.

    Science.gov (United States)

    Ventura, Jonathan; Arth, Clemens; Reitmayr, Gerhard; Schmalstieg, Dieter

    2014-04-01

    We propose the combination of a keyframe-based monocular SLAM system and a global localization method. The SLAM system runs locally on a camera-equipped mobile client and provides continuous, relative 6DoF pose estimation as well as keyframe images with computed camera locations. As the local map expands, a server process localizes the keyframes with a pre-made, globally-registered map and returns the global registration correction to the mobile client. The localization result is updated each time a keyframe is added, and observations of global anchor points are added to the client-side bundle adjustment process to further refine the SLAM map registration and limit drift. The end result is a 6DoF tracking and mapping system which provides globally registered tracking in real-time on a mobile device, overcomes the difficulties of localization with a narrow field-of-view mobile phone camera, and is not limited to tracking only in areas covered by the offline reconstruction.
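    The client-side effect of the server's answer can be sketched in a few lines: once the server returns a correction transform, every locally tracked keyframe pose is re-expressed in the global map frame. This is illustrative only; the function names and the rigid-transform form of the correction are assumptions, not the paper's implementation.

```python
import numpy as np

def pose(rotation, translation):
    """Assemble a 4x4 homogeneous pose from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def register_keyframes(local_poses, T_global_from_local):
    """Map keyframe poses from the client's local SLAM frame into the
    globally registered frame, once the server has estimated the
    correction by localizing keyframes against the pre-made map."""
    return [T_global_from_local @ T for T in local_poses]
```

    In the paper's pipeline the correction is then also fed back as global anchor-point observations into the client-side bundle adjustment; the sketch above covers only the re-registration step.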

  11. 3D environment capture from monocular video and inertial data

    Science.gov (United States)

    Clark, R. Robert; Lin, Michael H.; Taylor, Colin J.

    2006-02-01

    This paper presents experimental methods and results for 3D environment reconstruction from monocular video augmented with inertial data. One application targets sparsely furnished room interiors, using high quality handheld video with a normal field of view, and linear accelerations and angular velocities from an attached inertial measurement unit. A second application targets natural terrain with manmade structures, using heavily compressed aerial video with a narrow field of view, and position and orientation data from the aircraft navigation system. In both applications, the translational and rotational offsets between the camera and inertial reference frames are initially unknown, and only a small fraction of the scene is visible in any one video frame. We start by estimating sparse structure and motion from 2D feature tracks using a Kalman filter and/or repeated, partial bundle adjustments requiring bounded time per video frame. The first application additionally incorporates a weak assumption of bounding perpendicular planes to minimize a tendency of the motion estimation to drift, while the second application requires tight integration of the navigational data to alleviate the poor conditioning caused by the narrow field of view. This is followed by dense structure recovery via graph-cut-based multi-view stereo, meshing, and optional mesh simplification. Finally, input images are texture-mapped onto the 3D surface for rendering. We show sample results from multiple, novel viewpoints.
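    The inertial-driven prediction half of such a filter can be sketched with a constant-velocity state propagated by accelerometer input. This is a toy 6-state [position, velocity] version under assumed noise parameters; the paper's filter also carries structure, orientation, and the initially unknown camera-IMU offsets.

```python
import numpy as np

def kf_predict(x, P, accel, dt, q=1e-3):
    """One Kalman prediction step for a [position, velocity] camera
    state driven by an accelerometer reading, as in a loosely coupled
    visual-inertial structure-from-motion filter (toy version)."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)               # position += velocity * dt
    B = np.vstack([0.5 * dt**2 * np.eye(3),  # position += 0.5 * a * dt^2
                   dt * np.eye(3)])          # velocity += a * dt
    x_new = F @ x + B @ accel
    P_new = F @ P @ F.T + q * np.eye(6)      # inflate state uncertainty
    return x_new, P_new
```

    The matching update step would then correct the prediction with reprojection errors from the 2D feature tracks.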

  12. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    Directory of Open Access Journals (Sweden)

    Robinson Larry R

    2009-10-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and more realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.
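    The depth geometry described above follows from similar triangles between the two eyes and the screen plane; a small sketch with an assumed 6.5 cm interpupillary distance:

```python
def perceived_distance_m(screen_disparity_m, viewing_dist_m, ipd_m=0.065):
    """Distance at which a point is perceived when its left/right images
    are separated horizontally by screen_disparity_m on a screen at
    viewing_dist_m (similar-triangles model; positive = uncrossed
    disparity, placing the point behind the screen plane)."""
    denom = ipd_m - screen_disparity_m
    if denom <= 0:
        return float('inf')  # disparity >= IPD: parallel or diverging rays
    return ipd_m * viewing_dist_m / denom
```

    Zero disparity places the point on the screen itself, and disparity equal to the full IPD corresponds to depth at infinity, which is why stereoscopic content rarely uses on-screen separations near the interocular distance.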

  15. General stereoscopic distortion rectification due to arbitrary viewer motion in binocular stereoscopic display

    Science.gov (United States)

    Li, Qun; Schonfeld, Dan

    2014-03-01

    Background: In binocular stereoscopic display, stereoscopic distortions due to viewer motion, such as depth distortion, shear distortion, and rotation distortion, result in misperception of the stereo content and reduce visual comfort dramatically. In the past, perceived depth distortion has been thoroughly addressed, and shear distortion has been investigated within the context of multi-view display to accommodate motion parallax. However, the impact of rotation distortion has barely been studied, and no technique is available to address stereoscopic distortions due to general viewer motion. Objective: To preserve an undistorted 3D perception from a fixed viewpoint irrespective of viewing position. Method: We propose a unified system and method that rectifies stereoscopic distortion due to general affine viewer motion and delivers a fixed perspective of the 3D scene without distortion irrespective of viewer motion. The system assumes eye tracking of the viewer and adjusts the display location of the stereo pair pixel-wise based on the tracked eye location. Results: For demonstration purposes, we implement our method to control perceived depth in binocular stereoscopic display of red/cyan anaglyph 3D. The user first perceives the designed perspective of the 3D scene at the reference position, then moves to six different positions at various distances and angles relative to the screen. At all positions, users report perceiving much more consistent stereo content with the adjusted displays and, at the same time, experience improved visual comfort. Novelty: We address stereoscopic distortions with the goal of maintaining a fixed perspective of the stereo scene, and propose a unified solution that simultaneously rectifies the stereoscopic distortions resulting from arbitrary viewer motion.
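    The core geometric operation such a system performs can be sketched as a ray-screen intersection per tracked eye: to keep a virtual point perceived at a fixed 3D location, each half of the stereo pair is drawn where the line from that eye through the point crosses the screen. This is a per-point sketch under an assumed screen-at-z=0 convention; the paper operates pixel-wise over the whole stereo pair.

```python
import numpy as np

def screen_point_for_eye(eye_pos, target_3d):
    """Intersect the ray from a tracked eye position through the desired
    virtual 3D point with the screen plane z = 0. Drawing the image for
    that eye at this intersection keeps the point's perceived location
    fixed as the viewer moves."""
    e = np.asarray(eye_pos, dtype=float)
    p = np.asarray(target_3d, dtype=float)
    t = e[2] / (e[2] - p[2])      # ray parameter where z reaches 0
    return (e + t * (p - e))[:2]

def rectified_pair(left_eye, right_eye, target_3d):
    """Screen positions for the left and right images of one 3D point."""
    return (screen_point_for_eye(left_eye, target_3d),
            screen_point_for_eye(right_eye, target_3d))
```

    A point lying on the screen plane maps to itself for any eye position (zero disparity), which is a quick sanity check on the geometry.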

  16. Optical characterization of auto-stereoscopic 3D displays: interest of the resolution and comparison to human eye properties

    Science.gov (United States)

    Boher, Pierre; Leroux, Thierry; Bignon, Thibault; Collomb-Patton, Véronique

    2014-02-01

    Optical characterization of multi-view auto-stereoscopic displays is realized using high angular resolution viewing angle measurements and imaging measurements. View to view and global qualified binocular viewing space are computed from viewing angle measurements and verified using imaging measurements. Crosstalk uniformity is also deduced and related to display imperfections.

  17. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).
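    The linear-combination model can be illustrated by summing the center modulation with a weighted, phase-shifted copy of the surround modulation and taking the amplitude of the result. The weight, and the sign of the surround's influence, are placeholders for illustration, not the paper's fitted values:

```python
import math

def perceived_modulation(center_amp, surround_amp, phase_deg, weight=0.5):
    """Amplitude of the sum of the center modulation and a weighted copy
    of the surround modulation at the given relative phase -- a minimal
    form of a linear combination model of center and surround."""
    phi = math.radians(phase_deg)
    re = center_amp + weight * surround_amp * math.cos(phi)
    im = weight * surround_amp * math.sin(phi)
    return math.hypot(re, im)
```

    With this form, perceived modulation depth varies sinusoidally with relative phase, which is the qualitative dependence the measurements show; fitting separate weights per eye condition and frequency would distinguish the monocular and binocular contributions.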

  18. The effect of contrast on monocular versus binocular reading performance.

    Science.gov (United States)

    Johansson, Jan; Pansell, Tony; Ygge, Jan; Seimyr, Gustaf Öqvist

    2014-05-14

    The binocular advantage in reading performance is typically small. On the other hand, research shows binocular reading to be remarkably robust to degraded stimulus properties. We hypothesized that this robustness may stem from an increasing binocular contribution. The main objective was to compare monocular and binocular performance at different stimulus contrasts and assess the level of binocular superiority. A secondary objective was to assess any asymmetry in performance related to ocular dominance. In a balanced repeated-measures experiment, 18 subjects read texts at three levels of contrast, monocularly and binocularly, while their eye movements were recorded. The binocular advantage increased with reduced contrast, producing 7% slower monocular reading at 40% contrast, 9% slower at 20% contrast, and 21% slower at 10% contrast. A statistically significant interaction effect was found in fixation duration, showing a more adverse effect in the monocular condition at the lowest contrast. No significant effects of ocular dominance were observed. The outcome suggests that binocularity contributes increasingly to reading performance as stimulus contrast decreases. The strongest difference between monocular and binocular performance was in fixation duration. The findings suggest a clinical point: it may be necessary to test at several contrast levels when estimating reading performance. © 2014 ARVO.

  19. Hazard detection with a monocular bioptic telescope.

    Science.gov (United States)

    Doherty, Amy L; Peli, Eli; Luo, Gang

    2015-09-01

    The safety of bioptic telescopes for driving remains controversial. The ring scotoma, an area of the visual field hidden from the telescope eye due to the telescope magnification, has been the main cause of concern. This study evaluates whether bioptic users can use the fellow eye to detect hazards in driving videos when those hazards fall in the ring scotoma area. Twelve visually impaired bioptic users watched a series of driving hazard perception training videos and responded as soon as they detected a hazard while reading aloud letters presented on the screen. The letters were placed such that, when reading them through the telescope, the hazard fell in the ring scotoma area. Four conditions were tested: no bioptic and no reading, reading without the bioptic, reading with a bioptic that did not occlude the fellow eye (non-occluding bioptic), and reading with a bioptic that partially occluded the fellow eye. Eight normally sighted subjects performed the same task with the partially occluding bioptic, detecting lateral hazards (blocked by the device scotoma) and vertical hazards (outside the scotoma), to further determine the cause-and-effect relationship between hazard detection and the fellow eye. There were significant differences in performance between conditions: 83% of hazards were detected with no reading task, dropping to 67% in the reading task with no bioptic, to 50% while reading with the non-occluding bioptic, and to 34% while reading with the partially occluding bioptic. For the normally sighted subjects, detection of vertical hazards (53%) was significantly higher than of lateral hazards (38%) with the partially occluding bioptic. Detection of driving hazards is impaired by the addition of a secondary reading-like task, and is further impaired when reading through a monocular telescope. The effect of the partially occluding bioptic supports the role of the non-occluded fellow eye in compensating for the ring scotoma. © 2015 The Authors. Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  20. Psychometric Assessment of Stereoscopic Head-Mounted Displays

    Science.gov (United States)

    2016-06-29

    Journal article; dates covered: January–December 2015. This paper details the psychometric validation of the stereoscopic rendering of a virtual environment using game-based simulation software. Keywords: head-mounted display, near-eye display, stereo display, stereo HMD, psychometric assessment, stereoscopic performance, eye-limited stereo vision.

  1. Ernst Mach and the episode of the monocular depth sensations.

    Science.gov (United States)

    Banks, E C

    2001-01-01

    Although Ernst Mach is widely recognized in psychology for his discovery of the effects of lateral inhibition in the retina ("Mach Bands"), his contributions to the theory of depth perception are not as well known. Mach proposed that steady luminance gradients triggered sensations of depth. He also expanded on Ewald Hering's hypothesis of "monocular depth sensations," arguing that they were subject to the same principle of lateral inhibition as light sensations were. Even after Hermann von Helmholtz's attack on Hering in 1866, Mach continued to develop theories involving the monocular depth sensations, proposing an explanation of perspective drawings in which the mutually inhibiting depth sensations scaled to a mean depth. Mach also contemplated a theory of stereopsis in which monocular depth perception played the primary role. Copyright 2001 John Wiley & Sons, Inc.

  2. Mobile Target Tracking Based on Hybrid Open-Loop Monocular Vision Motion Control Strategy

    Directory of Open Access Journals (Sweden)

    Cao Yuan

    2015-01-01

    This paper proposes a new real-time target tracking method based on open-loop monocular vision motion control. It uses the particle filter technique to predict the moving target's position in an image; owing to the properties of the particle filter, the method can effectively capture both linear and nonlinear motion behaviors. In addition, the method uses simple mathematical operations to map the target's image coordinates to real-world coordinates, so it requires few computational resources. Moreover, it adopts a monocular vision approach, i.e. a single camera, and therefore needs little hardware. First, the method estimates the target's position and size in the image at the next time step. Then, the corresponding real-world position of the target is predicted from that information. Finally, the mobile robot is controlled so as to keep the target centered in the camera's view. The paper reports tracking tests on L-shaped and S-shaped trajectories and compares the method with Kalman filtering. The experimental results show that the method achieves a good tracking effect, superior to the Kalman filter technique in both the L-shaped and S-shaped tracking experiments.
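    The predict/update/resample cycle the abstract relies on can be sketched as a generic bootstrap particle filter tracking a 2D image position. The random-walk dynamics and the Gaussian likelihood with its noise levels are assumptions for illustration, not the paper's exact motion model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a bootstrap particle filter
    tracking a 2D target position in image coordinates."""
    # Predict: diffuse particles with random-walk motion noise.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: re-weight by a Gaussian likelihood of the new measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

def estimate(particles, weights):
    """Weighted-mean state estimate."""
    return np.average(particles, axis=0, weights=weights)
```

    After a handful of cycles on a stationary measurement the particle cloud collapses around the target; fed successive measurements, the same loop tracks a moving target.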

  3. A Comparison of Monocular and Binocular Depth Perception in 5- and 7-Month-Old Infants.

    Science.gov (United States)

    Granrud, Carl E.; And Others

    1984-01-01

    Compares monocular depth perception with binocular depth perception in five- to seven-month-old infants. Reaching preferences (dependent measure) observed in the monocular condition indicated sensitivity to monocular depth information. Binocular viewing resulted in a far more consistent tendency to reach for the nearer object. (Author)

  4. GIS Based Stereoscopic Visualization Technique for Weather Radar Data

    Science.gov (United States)

    Lim, S.; Jang, B. J.; Lee, K. H.; Lee, C.; Kim, W.

    2014-12-01

    As rainfall becomes more erratic and localized, it is important to provide prompt and accurate warnings to the public. Monitoring localized heavy rainfall requires a reliable disaster monitoring system with advanced remote observation technology and a high-precision display system. To enable even more accurate weather monitoring with weather radar, there has been growing interest in mapping radar observations onto geographical coordinate systems in real time, and in visualization and display methods for radar data based on spatial interpolation techniques and geographical information systems (GIS). Currently, displaying GIS and radar data simultaneously is widely used to synchronize the radar and ground systems accurately, and displaying radar data in a 2D GIS coordinate system has been the standard way of presenting weather information from weather radar. This paper proposes a realistic 3D weather radar display technique with higher spatiotemporal resolution, based on the integration of 3D image processing and GIS interaction. The method focuses on stereoscopic visualization, whereas conventional radar displays rely on flat, two-dimensional interpretation. Furthermore, with the proposed technique, atmospheric changes at each moment can be observed three-dimensionally at multiple geographical locations simultaneously. Simulation results indicate that 3D display of weather radar data can be performed in real time. One merit of the proposed technique is that it provides an intuitive understanding of the influence of beam blockage by topography: by exactly matching each 3D-modelled radar beam with the 3D GIS map, terrain-masked areas can be identified, which facilitates correcting the underestimation of quantitative precipitation estimates (QPE) caused by ground-clutter filtering. It can also be expected that more accurate short-term forecasting will be
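    Whether a beam is blocked by terrain at a given range follows from the standard beam-height computation; a sketch under the usual 4/3-effective-earth-radius refraction model (the paper's full 3D beam/GIS matching is more involved, but this is the underlying geometry):

```python
import math

EFFECTIVE_EARTH_RADIUS_M = 4.0 / 3.0 * 6_371_000.0  # standard refraction model

def beam_height_m(range_m, elev_deg, antenna_height_m=0.0):
    """Height of the radar beam centre above the antenna datum at a given
    slant range, using the 4/3-effective-earth-radius model."""
    ke_r = EFFECTIVE_EARTH_RADIUS_M
    theta = math.radians(elev_deg)
    return (math.sqrt(range_m**2 + ke_r**2
                      + 2.0 * range_m * ke_r * math.sin(theta))
            - ke_r + antenna_height_m)

def beam_blocked(range_m, elev_deg, terrain_height_m, antenna_height_m=0.0):
    """True when terrain at this range rises above the beam centre --
    the blockage condition that 3D beam/GIS matching makes visible."""
    return terrain_height_m > beam_height_m(range_m, elev_deg,
                                            antenna_height_m)
```

    For example, a 0.5° beam sits roughly 1.5 km above the radar at 100 km range, so a 2 km ridge at that distance blocks it while 1 km terrain does not.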

  5. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of surgery, with its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams at standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies, and then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. Twenty-five 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students, who were shown the material monoscopically on a conventional laptop, served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  6. The Influence of Monocular Spatial Cues on Vergence Eye Movements in Monocular and Binocular Viewing of 3-D and 2-D Stimuli.

    Science.gov (United States)

    Batvinionak, Anton A; Gracheva, Maria A; Bolshakov, Andrey S; Rozhkova, Galina I

    2015-01-01

    The influence of monocular spatial cues on vergence eye movements was studied in two series of experiments: (I) the subjects viewed a 3-D video and its 2-D version, binocularly and monocularly; and (II) in binocular and monocular viewing conditions, the subjects were presented with stationary 2-D stimuli that either contained or did not contain monocular indications of spatial arrangement. The results of series (I) showed that, in binocular viewing conditions, vergence eye movements were only present for the 3-D video and not the 2-D video, while in the course of monocular viewing of the 2-D video, some regular vergence eye movements could be revealed, suggesting that the occluded eye's position could be influenced by the spatial organization of the scene reconstructed on the basis of the monocular depth information provided by the viewing eye. The data obtained in series (II) generally seem to support this hypothesis. © The Author(s) 2015.

  7. Toward a stereoscopic encoder/decoder for digital cinema

    Science.gov (United States)

    Bensalma, Rafik; Larabi, Mohamed-Chaker

    2008-02-01

    Digital cinema is very challenging because it represents tomorrow's way of capturing, post-producing and projecting movies. Specifications for this medium are provided by DCI (Digital Cinema Initiatives), founded by the Hollywood majors. Among the specifications are requirements on resolution, bitrate, JPEG2000 compression, and so on. Moreover, the market assumes that 3D could raise the turnover of the cinema industry. The problem with 3D is that two streams (left and right) are required, which doubles the amount of data and demands adapted devices to decode and project movies. The cinema industry, represented by the stereoscopic group in SMPTE, has expressed the need for a unique master that combines the two streams in one. This paper focuses on the generation of a master from one of the streams and the embedding of the redundant information as metadata in the JPEG2000 code-stream or MXF. The idea is to use the reference image, in addition to some metadata, to reconstruct the target image. The metadata represent the residual image and the contour description. The quality of the reconstructed images depends on the compression ratio of the residual image. The obtained results are encouraging, and the choice between JPEG2000 metadata embedding and MXF metadata remains to be made.
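    The reconstruction path described, reference view plus embedded residual, can be sketched with plain arrays. This toy keeps the residual lossless and omits the contour description and the JPEG2000/MXF packaging, which is where the actual compression trade-off arises:

```python
import numpy as np

def make_residual(reference, target):
    """Encoder side: residual between the two views of the stereo pair,
    to be embedded as metadata alongside the reference stream."""
    return target.astype(np.int16) - reference.astype(np.int16)

def reconstruct_target(reference, residual):
    """Decoder side: rebuild the second view from the single master
    stream plus the embedded residual."""
    out = reference.astype(np.int16) + residual
    return np.clip(out, 0, 255).astype(np.uint8)
```

    With an uncompressed residual the round trip is exact; compressing the residual, as the paper does, trades reconstruction quality against metadata size.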

  8. Human factors involved in perception and action in a natural stereoscopic world: an up-to-date review with guidelines for stereoscopic displays and stereoscopic virtual reality (VR)

    Science.gov (United States)

    Perez-Bayas, Luis

    2001-06-01

    In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. Nevertheless, in reality many other factors are involved before the 'final' conscious and subconscious stereoscopic percept, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (the pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision as opposed to peripheral vision, and the brain's 'top-down' information in relation to psychological factors such as memory of previous experiences and present emotions. The brain's internal mapping of a purely perceptual world might differ from its internal mapping of a visual-motor space, which represents an 'action-directed perceptual world.' In addition, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, and likewise in a world engaging peripheral vision (as in VR, which uses a curved perspective representation, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. This paper presents the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.

  9. More clinical observations on migraine associated with monocular visual symptoms in an Indian population

    Directory of Open Access Journals (Sweden)

    Vishal Jogi

    2016-01-01

    Context: Retinal migraine (RM) is considered one of the rare causes of transient monocular visual loss (TMVL) and has not been studied in an Indian population. Objectives: The study aims to analyze the clinical and investigational profile of patients with RM. Materials and Methods: This is an observational prospective analysis of 12 cases of TMVL fulfilling the International Classification of Headache Disorders, 2nd edition (ICHD-II) criteria for RM, examined in the Neurology and Ophthalmology Outpatient Departments (OPD) of the Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh from July 2011 to October 2012. Results: Most patients presented in the 3rd and 4th decades, with equal sex distribution. Seventy-five percent had antecedent migraine without aura (MoA) and 25% had migraine with aura (MA). Headache was ipsilateral to the visual symptoms in 67% and bilateral in 33%. TMVL preceded headache onset in 58% and occurred during the headache episode in 42%. Visual symptoms were predominantly negative, except in one patient who had positive followed by negative symptoms. The duration of visual symptoms was variable, ranging from 30 s to 45 min. No patient had permanent monocular vision loss. Three patients had episodes of TMVL without headache in addition to the symptom constellation defining RM. Most of the tests done to rule out alternative causes were normal. Magnetic resonance imaging (MRI) of the brain showed nonspecific white matter changes in one patient. Visual evoked potentials (VEP) showed prolonged P100 latencies in two cases. A patent foramen ovale was detected in one patient. Conclusions: RM is a definite subtype of migraine and should remain in the ICHD classification. It should be kept as one of the differential diagnoses of transient monocular vision loss. We propose the existence of 'acephalgic RM', which may respond to migraine prophylaxis.

  10. Monocular 3D Reconstruction and Augmentation of Elastic Surfaces with Self-Occlusion Handling.

    Science.gov (United States)

    Haouchine, Nazim; Dequidt, Jeremie; Berger, Marie-Odile; Cotin, Stephane

    2015-12-01

    This paper focuses on 3D shape recovery and augmented reality on elastic objects with self-occlusion handling, using only single-view images. Shape recovery from a monocular video sequence is an under-constrained problem, and many approaches have been proposed to enforce constraints and resolve the ambiguities. State-of-the-art solutions enforce smoothness or geometric constraints, consider specific deformation properties such as inextensibility, or resort to shading constraints. However, few of them can properly handle large elastic deformations. We propose in this paper a real-time method that uses a mechanical model and is able to handle highly elastic objects. The problem is formulated as an energy minimization problem accounting for a non-linear elastic model constrained by external image points acquired from a monocular camera. This method prevents us from formulating restrictive assumptions and specific constraint terms in the minimization. In addition, we propose to handle self-occluded regions thanks to the ability of mechanical models to provide appropriate predictions of the shape. Our method is compared to existing techniques with experiments conducted on computer-generated and real data, which show the effectiveness of recovering and augmenting 3D elastic objects. Additionally, experiments in the context of minimally invasive liver surgery are provided, and results on deformations in the presence of self-occlusions are presented.

  11. A stereoscopic system for viewing the temporal evolution of brain activity clusters in response to linguistic stimuli.

    Science.gov (United States)

    Forbes, Angus; Villegas, Javier; Almryde, Kyle R; Plante, Elena

    2014-03-06

    In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language through displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.

  12. Performance studies of the new stereoscopic Sum-Trigger-II of MAGIC after one year of operation

    CERN Document Server

    Dazzi, F; Lopez, M; Nakajima, D; Garcia, J Rodriguez; Schweizer, T

    2015-01-01

    MAGIC is a stereoscopic system of two Imaging Air Cherenkov Telescopes (IACTs) located at La Palma (Canary Islands, Spain) and working in the field of very high energy gamma-ray astronomy. It makes use of a traditional digital trigger with an energy threshold of around 55 GeV. A novel trigger strategy, based on the analogue sum of signals from partially overlapped patches of pixels, leads to a lower threshold. In 2008, this principle was proven by the detection of the Crab Pulsar at 25 GeV by MAGIC in single telescope operation. During Winter 2013/14, a new system, based on this concept, was implemented for stereoscopic observations after several years of development. In this contribution the strategy of the operative stereoscopic trigger system, as well as the first performance studies, are presented. Finally, some possible future improvements to further reduce the energy threshold of this trigger are addressed.
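    The analogue sum-trigger idea can be illustrated in a few lines: instead of requiring several individual pixels above a threshold, the signals of partially overlapping patches of neighbouring pixels are summed before the threshold comparison, so a faint, diffuse shower can still fire the trigger. The sketch below is a toy 1D model with assumed patch size and threshold values, not MAGIC's actual trigger electronics.

    ```python
    import numpy as np

    def sum_trigger(amplitudes, patch_size=19, threshold=100.0):
        """Fire when the summed signal of any patch of neighbouring pixels
        (here a 1D sliding window) reaches the threshold."""
        sums = np.convolve(np.asarray(amplitudes, dtype=float),
                           np.ones(patch_size), mode="valid")
        return bool((sums >= threshold).any())

    def digital_trigger(amplitudes, pixel_threshold=10.0, n_pixels=3):
        """Conventional digital trigger: several individual pixels must each
        exceed a per-pixel threshold."""
        a = np.asarray(amplitudes, dtype=float)
        return int((a >= pixel_threshold).sum()) >= n_pixels
    ```

    A signal of 6 units spread over 30 pixels passes the sum trigger (any 19-pixel patch sums to 114) while every pixel stays below a per-pixel threshold of 10, which is how summation lowers the effective energy threshold.
    
    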

  13. Evidence of basal temporo-occipital cortex involvement in stereoscopic vision in humans: a study with subdural electrode recordings.

    Science.gov (United States)

    Gonzalez, Francisco; Relova, José Luis; Prieto, Angel; Peleteiro, Manuel

    2005-01-01

    Stereoscopic vision is based on small differences between the two retinal images, known as retinal disparities. We investigated the cortical responses to retinal disparities in a patient suffering from occipital epilepsy by recording evoked potentials to random dot stereograms (RDS) from subdural electrodes placed in the parieto-occipito-temporal junction, the medial surface of the occipital lobe (pericalcarine cortex), and the basal surface of the occipital and temporal lobes (fusiform gyrus). Clear responses to the disparity present in RDS were found in the fusiform cortex. We observed that the fusiform responses discriminate the onset from the offset of the stimulus and correlation from uncorrelation, and that they show a longer latency than responses found in the pericalcarine cortex. Our findings indicate that the fusiform area is involved in the processing of stereoscopic information and shows responses that suggest a high level of stereoscopic processing.

  14. A stereoscopic system for viewing the temporal evolution of brain activity clusters in response to linguistic stimuli

    Science.gov (United States)

    Forbes, Angus; Villegas, Javier; Almryde, Kyle R.; Plante, Elena

    2014-03-01

    In this paper, we present a novel application, 3D+Time Brain View, for the stereoscopic visualization of functional Magnetic Resonance Imaging (fMRI) data gathered from participants exposed to unfamiliar spoken languages. An analysis technique based on Independent Component Analysis (ICA) is used to identify statistically significant clusters of brain activity and their changes over time during different testing sessions. That is, our system illustrates the temporal evolution of participants' brain activity as they are introduced to a foreign language through displaying these clusters as they change over time. The raw fMRI data is presented as a stereoscopic pair in an immersive environment utilizing passive stereo rendering. The clusters are presented using a ray casting technique for volume rendering. Our system incorporates the temporal information and the results of the ICA into the stereoscopic 3D rendering, making it easier for domain experts to explore and analyze the data.

  15. Evaluation of monoscopic and stereoscopic displays for visual-spatial tasks in medical contexts.

    Science.gov (United States)

    Martinez Escobar, Marisol; Junke, Bethany; Holub, Joseph; Hisley, Kenneth; Eliot, David; Winer, Eliot

    2015-06-01

    In the medical field, digital images are present in diagnosis, pre-operative planning, minimally invasive surgery, instruction, and training. The use of medical digital imaging has afforded new ways to interact with a patient, such as seeing fine details inside a body. This increased usage also raises many basic research questions on human perception and performance when utilizing these images. The work presented here attempts to answer the question: how would adding the stereopsis depth cue affect relative position tasks in a medical context compared to a monoscopic view? By designing and conducting a study to isolate the benefits between monoscopic 3D and stereoscopic 3D displays in a relative position task, the following hypothesis was tested: stereoscopic 3D displays are beneficial over monoscopic 3D displays for relative position judgment tasks in a medical visualization setting. Forty-four medical students completed a series of relative position judgment tasks. The results show that the stereoscopic condition yielded a higher score than the monoscopic condition, supporting the hypothesis.

  16. Laparoscopic stereoscopic augmented reality: toward a clinically viable electromagnetic tracking solution.

    Science.gov (United States)

    Liu, Xinyang; Kang, Sukryool; Plishker, William; Zaki, George; Kane, Timothy D; Shekhar, Raj

    2016-10-01

    The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions to mount the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be [Formula: see text] and [Formula: see text] for the left- and right-eye channels, respectively. The AR system contributed 58-ms latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, and EM tracking-based stereoscopic AR visualization system, which has the potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures.

  17. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations.

    Science.gov (United States)

    Binda, Paola; Lunghi, Claudia

    2017-01-01

    Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.
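    The hippus amplitude used as the index above can be approximated as the RMS amplitude of the pupil trace restricted to a low-frequency band. The following is an illustrative sketch only; the band edges and sampling rate are assumptions, not the paper's actual analysis pipeline.

    ```python
    import numpy as np

    def hippus_amplitude(pupil, fs, band=(0.2, 1.0)):
        """RMS amplitude of slow pupil oscillations: zero out all spectral
        components outside a low-frequency band, then take the RMS of the
        band-limited signal. `pupil` is a 1D trace sampled at `fs` Hz."""
        x = np.asarray(pupil, dtype=float)
        x = x - x.mean()                       # remove the DC offset
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        spec = np.fft.rfft(x)
        spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
        filtered = np.fft.irfft(spec, n=x.size)
        return np.sqrt(np.mean(filtered ** 2))
    ```

    For a pure 0.5 Hz oscillation of unit amplitude, the function returns roughly 1/sqrt(2) ≈ 0.707, while fast components (e.g. 5 Hz) are excluded from the measure.
    
    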

  18. Parallax error in the monocular head-mounted eye trackers

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2012-01-01

    This paper investigates the parallax error, which is a common problem of many video-based monocular mobile gaze trackers. The parallax error is defined and described using the epipolar geometry in a stereo camera setup. The main parameters that change the error are introduced and it is shown how...

  19. Monocular SLAM for Autonomous Robots with Enhanced Features Initialization

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2014-04-01

    Full Text Available This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.

  20. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce estimations for depth through parallax. These depth estimations are used to solve a related problem with DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided through results from real data, showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.
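    For a rectified pair, the metric depth estimate produced by such a pseudo-calibrated stereo rig reduces to standard triangulation from parallax: depth = focal length x baseline / disparity. A minimal sketch with hypothetical values (not the authors' implementation):

    ```python
    import numpy as np

    def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
        """Triangulate depth (metres) for matched feature x-coordinates
        (pixels) in a rectified stereo pair."""
        disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
        if np.any(disparity <= 0):
            raise ValueError("non-positive disparity: bad match or point at infinity")
        return focal_px * baseline_m / disparity

    # A landmark seen with a 100 px disparity by a rig with an 800 px focal
    # length and a 0.5 m baseline lies 800 * 0.5 / 100 = 4 m away:
    z = depth_from_disparity([400.0], [300.0], focal_px=800.0, baseline_m=0.5)
    ```

    In the HRI setting described above, the baseline between the human-worn and robot cameras is only approximately known, which is why the estimates serve to fix the metric scale rather than to build a dense map.
    
    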

  1. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and task requirements (minimizing body and gaze movements), slow pupil oscillations, “hippus,” spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  2. Stereoscopic comparison as the long-lost secret to microscopically detailed illumination like the Book of Kells'.

    Science.gov (United States)

    Cisne, John L

    2009-01-01

    The idea that the seventh- and eighth-century illuminators of the finest few Insular manuscripts had a working knowledge of stereoscopic images (otherwise an eighteenth- and nineteenth-century discovery) helps explain how they could create singularly intricate, microscopically detailed designs at least five centuries before the earliest known artificial lenses of even spectacle quality. An important clue to this long-standing problem is that interlace patterns drawn largely freehand in lines spaced as closely as several per millimeter repeat so exactly across whole pages that repetitions can be free-fused to form microscopically detailed stereoscopic images whose relief in some instances indicates precision unsurpassed in astronomical instruments until the Renaissance. Spacings between repetitions commonly harmonize closely enough with normal interpupillary distances that copying disparities can be magnified tens of times in the stereoscopic relief of the images. The proposed explanation: to copy a design, create a pattern, or perfect a design's template, the finest illuminators worked by successive approximation, using their presumably unaided eyes first as a camera lucida to fill a measured grid with multiple copies from a design, and then as a stereocomparator to detect and minimize disparities between repetitions by minimizing the relief of stereoscopic images, in the manner of a Howard-Dolman stereoacuity test done in reverse.

  3. Optimal control of set-up margins and internal margins for intra- and extracranial radiotherapy using stereoscopic kilovoltage imaging

    Energy Technology Data Exchange (ETDEWEB)

    Verellen, D.; Soete, G.; Linthout, N.; Tournel, K.; Storme, G. [Vrije Universiteit Brussel (AZ-VUB), Dept. of Radiotherapy, Oncology Center, Academic Hospital, Brussels (Belgium)

    2006-09-15

    In this paper the clinical introduction of stereoscopic kV-imaging in combination with a 6 degrees-of-freedom (6 DOF) robotics system and breathing-synchronized irradiation will be discussed in view of optimally reducing inter-fractional as well as intra-fractional geometric uncertainties in conformal radiation therapy. Extracranial cases represent approximately 70% of the patient population on the NOVALIS treatment machine (BrainLAB A.G., Germany) at the AZ-VUB, which is largely due to the efficiency of the real-time positioning features of the kV-imaging system. The prostate case will be used as an example of those target volumes showing considerable day-to-day changes in position, yet with negligible motion during the actual course of the treatment. As such it will be used to illustrate on-line target localization using kV-imaging and 6 DOF patient adjustment, with and without implanted radio-opaque markers, prior to treatment. Small lung lesions will be used to illustrate the system's potential to synchronize the irradiation with breathing in coping with intra-fractional organ motion. (authors)

  4. The Modelling of Stereoscopic 3D Scene Acquisition

    Directory of Open Access Journals (Sweden)

    M. Hasmanda

    2012-04-01

    Full Text Available The main goal of this work is to find a suitable method for calculating the best setting of a stereo pair of cameras viewing a scene for spatial imaging. The method is based on a geometric model of the stereo pair of cameras currently used for the acquisition of 3D scenes. Based on selectable camera parameters and object positions in the scene, the resultant model allows calculating the parameters of the stereo pair of images that influence the quality of spatial imaging. To present the properties of the model on a simple 3D scene, an interactive application was created that allows, in addition to setting the camera and scene parameters and displaying the calculated parameters, displaying the modelled scene using perspective views and the stereo pair modelled with the aid of anaglyphic images. The resulting modelling method can be used in practice to determine appropriate parameters of the camera configuration based on the known arrangement of the objects in the scene. Analogously, for a given camera configuration, it can determine appropriate geometric limits for arranging the objects in the scene being displayed. This method ensures that the resulting stereoscopic recording will be of good quality and observer-friendly.
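    A core quantity in such a geometric model is the on-sensor disparity of a scene point as a function of camera baseline, focal length, and convergence distance. The sketch below uses the standard formula for a parallel (shifted-sensor) rig with illustrative parameter values; it is not the paper's model, which covers more camera parameters.

    ```python
    def screen_disparity(depth_m, baseline_m, focal_px, convergence_m):
        """Disparity (in pixels) of a point at depth_m for a parallel stereo
        rig whose images are shifted so that disparity is zero at the
        convergence plane. Negative values appear in front of the screen,
        positive values behind it."""
        return baseline_m * focal_px * (1.0 / convergence_m - 1.0 / depth_m)

    # With a 65 mm baseline, a 1000 px focal length and convergence at 2 m,
    # a point at 4 m has disparity 0.065 * 1000 * (1/2 - 1/4) = 16.25 px:
    d_far = screen_disparity(4.0, 0.065, 1000.0, 2.0)
    ```

    Because the far disparity is bounded by baseline x focal / convergence while the near disparity grows without limit, such a model lets one pick a baseline that keeps disparities within a comfortable range for the arrangement of objects in the scene.
    
    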

  5. Identification of depth information with stereoscopic mammography using different display methods

    Science.gov (United States)

    Morikawa, Takamitsu; Kodera, Yoshie

    2013-03-01

    Stereoscopy in radiography was widely used in the late 80's because it could capture complex structures in the human body, proving beneficial for diagnosis and screening. To observe such images stereoscopically, radiologists usually needed to train their eyes to perceive the stereoscopic effect. However, with the development of three-dimensional (3D) monitors and their use in the medical field, such training is no longer required. The question then arises as to whether there is any difference in recognizing depth information between the conventional methods and a 3D monitor. We constructed a phantom and evaluated the difference in the capacity to identify depth information between the two methods. The phantom consists of acrylic steps with 3 mm diameter acrylic pillars on the top and bottom of each step. Seven observers viewed these images stereoscopically using the two display methods and were asked to judge the direction of the pillar on top. We compared the judged direction with the direction of the real pillar arranged on top and calculated the percentage of correct answers (PCA). The results showed that the PCA obtained with the 3D monitor method was about 5% higher than that obtained with the naked-eye method. This indicates that people can view images stereoscopically more precisely with a 3D monitor than with conventional methods, such as crossed or parallel eye viewing. We were thus able to estimate the difference in the capacity to identify depth information between the two display methods.

  6. An HTML Tool for Production of Interactive Stereoscopic Compositions.

    Science.gov (United States)

    Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi

    2016-12-01

    The benefits of stereoscopic vision in medical applications have been appreciated and thoroughly studied for more than a century. The use of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming: new high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, completed with a corresponding application program interface (API), can be relatively easily integrated into a system. Such complexes could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems facilitating the stereoscopic effect. The tool's operation mode and the results of the conducted subjective and objective performance tests are then presented.

  7. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    Science.gov (United States)

    2014-10-28

    [Abstract not available: this record preserves only fragments of the report's apparatus and reference sections. Recoverable details: stereoscopic stimuli were presented using NVIDIA GeForce 3D Vision active shutter glasses with a Samsung SyncMaster 2233RZ, a 22-inch diagonal 120 Hz LCD with a resolution of 1680 x 1050.]

  8. Phase-only stereoscopic hologram calculation based on Gerchberg-Saxton iterative algorithm

    Science.gov (United States)

    Xia, Xinyi; Xia, Jun

    2016-09-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg-Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for a phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstruction than the traditional method. Project supported by the National Basic Research Program of China (Grant No. 2013CB328803) and the National High Technology Research and Development Program of China (Grant Nos. 2013AA013904 and 2015AA016301).
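    The Gerchberg-Saxton loop itself is compact: alternate between the hologram plane and the image plane, enforcing the target amplitude in one and the phase-only constraint in the other. Below is a minimal single-view NumPy sketch using an FFT as the propagation operator; the paper's hologram geometry and multi-view handling are not reproduced.

    ```python
    import numpy as np

    def gs_phase_only_hologram(target, n_iter=50, seed=0):
        """Find a phase-only hologram whose Fourier transform approximates
        the target amplitude (square root of a 2D intensity image)."""
        rng = np.random.default_rng(seed)
        amp = np.sqrt(np.asarray(target, dtype=float))
        amp = amp / amp.max()                      # normalized target amplitude
        # Start from a unit-magnitude field with random phase.
        field = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, amp.shape))
        for _ in range(n_iter):
            img = np.fft.fft2(field)               # propagate to image plane
            img = amp * np.exp(1j * np.angle(img)) # impose target amplitude
            field = np.fft.ifft2(img)              # back to hologram plane
            field = np.exp(1j * np.angle(field))   # phase-only constraint
        return np.angle(field)                     # the phase-only hologram
    ```

    After enough iterations, the magnitude of the Fourier transform of exp(i * phase) correlates strongly with the target amplitude, which is the sense in which the iteration improves on a one-shot phase encoding of the complex field.
    
    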

  9. Stereoscopic representation of the breast from two mammographic view with external markers

    Science.gov (United States)

    Kallergi, Maria; Manohar, Anand

    2003-05-01

    A new breast imaging technique has been developed and tested for the stereoscopic representation of the breast. The method uses markers at specific locations on the breast surface together with standard mammographic projections, and was tested with an anthropomorphic phantom containing five mass-like objects at locations determined by a CT scan. The phantom was imaged with a GE Senographe 2000D digital system with and without the markers. The algorithm's modules included: 1) breast area segmentation; 2) pectoral muscle segmentation; 3) registration and alignment of the mammographic projections based on selected reference points; 4) breast volume estimation based on the volume conservation principle during compression and shape definition using surface points; and 5) 3D lesion localization and representation. An interactive, IDL-based graphical interface was also developed for the stereoscopic display of the breast. The reconstruction algorithm assumed that the breast shrinks and stretches uniformly when compression is applied and removed. The relative movement of the markers after compression allowed more accurate estimation of the shrinking and stretching of the surface, offering a relatively simple and practical way to improve volume estimation and surface reconstruction. Such stereoscopic representation of the breast and associated findings may improve radiological interpretation and physical examinations for breast cancer diagnosis.

  10. CT virtual endoscopy and 3D stereoscopic visualisation in the evaluation of coronary stenting.

    Science.gov (United States)

    Sun, Z; Lawrence-Brown

    2009-10-01

    The aim of this case report is to present the additional value provided by CT virtual endoscopy and 3D stereoscopic visualisation compared with 2D visualisations in the assessment of coronary stenting. A 64-year-old patient was treated with left coronary stenting 8 years ago and was recently followed up with multidetector row CT angiography. An in-stent restenosis of the left coronary artery was suspected based on 2D axial and multiplanar reformatted images. 3D virtual endoscopy was generated to demonstrate the smooth intraluminal surface of the coronary artery wall, with no evidence of restenosis or intraluminal irregularity. A virtual fly-through of the coronary artery was produced to examine its entire length, with the aim of demonstrating the intraluminal changes following placement of the coronary stent. In addition, stereoscopic views were generated to show the relationship between the coronary artery branches and the coronary stent. In comparison with traditional 2D visualisations, virtual endoscopy was useful for assessing the intraluminal appearance of the coronary artery wall following coronary stent implantation, while stereoscopic visualisation improved observers' understanding of the complex cardiac structures. Thus, both methods can be used as complementary tools in cardiac imaging.

  11. Research and Construction Lunar Stereoscopic Visualization System Based on Chang'E Data

    Science.gov (United States)

    Gao, Xingye; Zeng, Xingguo; Zhang, Guihua; Zuo, Wei; Li, ChunLai

    2017-04-01

    With the lunar exploration activities carried out by the Chang'E-1, Chang'E-2 and Chang'E-3 lunar probes, a large amount of lunar data has been obtained, including topographical and image data covering the whole moon, as well as panoramic image data of the area close to the landing point of Chang'E-3. In this paper, we construct an immersive virtual moon system based on the acquired lunar exploration data using advanced stereoscopic visualization technology, which will help scholars carry out research on lunar topography, assist further exploration of lunar science, and facilitate lunar science outreach to the public. We focus on building the lunar stereoscopic visualization system as a combination of software and hardware, using binocular stereoscopic display technology, a real-time rendering algorithm for massive terrain data, and panorama-based virtual scene construction, to achieve an immersive virtual tour of the whole moon and of the local moonscape at the Chang'E-3 landing point.

  12. Asymmetric Stereoscopic Video Coding Algorithm Based on Stereoscopic Masking Effect

    Institute of Scientific and Technical Information of China (English)

    张冠军; 郁梅; 廖义

    2014-01-01

    It is inconvenient to store and transmit stereoscopic video due to the huge amount of data involved; the compression efficiency must therefore be further improved to reduce the transmission bit rate. The relationship of just-noticeable distortion between the left and right channels of stereoscopic video is analyzed, and an asymmetric stereoscopic video coding scheme based on the stereoscopic masking effect is proposed in this paper. The experimental results show that the proposed scheme can reduce the right-viewpoint bit rate by 11.45%-18.69% while the decoded reconstructed image maintains almost the same subjective quality. The proposed algorithm achieves better stereoscopic video compression performance compared with the traditional stereoscopic video coding scheme.

  13. Stereo improves 3D shape discrimination even when rich monocular shape cues are available.

    Science.gov (United States)

    Lee, Young Lim; Saunders, Jeffrey A

    2011-08-17

    We measured the ability to discriminate 3D shapes across changes in viewpoint and illumination based on rich monocular 3D information and tested whether the addition of stereo information improves shape constancy. Stimuli were images of smoothly curved, random 3D objects. Objects were presented in three viewing conditions that provided different 3D information: shading-only, stereo-only, and combined shading and stereo. Observers performed shape discrimination judgments for sequentially presented objects that differed in orientation by rotation of 0°-60° in depth. We found that rotation in depth markedly impaired discrimination performance in all viewing conditions, as evidenced by reduced sensitivity (d') and increased bias toward judging same shapes as different. We also observed a consistent benefit from stereo, both in conditions with and without change in viewpoint. Results were similar for objects with purely Lambertian reflectance and shiny objects with a large specular component. Our results demonstrate that shape perception for random 3D objects is highly viewpoint-dependent and that stereo improves shape discrimination even when rich monocular shape cues are available.
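    The sensitivity measure (d') used in such same/different judgments is computed from hit and false-alarm rates. A generic signal-detection sketch follows; the log-linear correction for extreme rates is a common convention, not necessarily the one used by the authors.

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
        log-linear correction so rates of exactly 0 or 1 stay finite."""
        h = (hits + 0.5) / (hits + misses + 1.0)
        fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf              # inverse standard-normal CDF
        return z(h) - z(fa)
    ```

    For example, 45 hits / 5 misses against 5 false alarms / 45 correct rejections gives a d' of about 2.5, while equal hit and false-alarm rates give d' = 0 (chance performance).
    
    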

  14. Higher resolution stimulus facilitates depth perception: MT+ plays a significant role in monocular depth perception.

    Science.gov (United States)

    Tsushima, Yoshiaki; Komine, Kazuteru; Sawahata, Yasuhito; Hiruma, Nobuyuki

    2014-10-20

    Today we are faced with high-quality virtual worlds of a completely new nature. For example, digital displays now have resolutions high enough that we cannot distinguish the display from the real world. However, little is known about how such high-quality representations contribute to the sense of realness, especially to depth perception. What is the neural mechanism for processing such fine but virtual representations? Here, we examined psychophysically and physiologically the relationship between stimulus resolution and depth perception, using luminance contrast (shading) as a monocular depth cue. We found that a higher-resolution stimulus facilitates depth perception even when the difference in stimulus resolution is undetectable. This finding runs against the traditional cognitive hierarchy of visual information processing, in which visual input is processed in a bottom-up cascade of cortical regions that analyze increasingly complex information, such as depth information. In addition, functional magnetic resonance imaging (fMRI) results reveal that the human middle temporal complex (MT+) plays a significant role in monocular depth perception. These results may provide not only new insight into the neural mechanisms of depth perception but also a view of how our visual system may develop alongside state-of-the-art display technologies.

  15. Measurements of turbulent premixed flame dynamics using cinema stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Adam M.; Driscoll, James F. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States); Ceccio, Steven L. [University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI (United States)

    2008-06-15

    A new experimental method is described that provides high-speed movies of turbulent premixed flame wrinkling dynamics and the associated vorticity fields. The method employs cinema stereoscopic particle image velocimetry and has been applied to a turbulent slot Bunsen flame. Three-component velocity fields were measured with high temporal and spatial resolutions of 0.9 ms and 140 µm, respectively. The flame-front location was determined using a new multi-step method based on particle image gradients, which is described. Comparisons between flame fronts found with this method and simultaneous CH-PLIF images show that the determined flame contour corresponds well to the true location of maximum gas density gradient. Time histories of typical eddy-flame interactions are reported and several important phenomena identified. Outwardly rotating eddy pairs wrinkle the flame and are attenuated as they pass through the flamelet. Significant flame-generated vorticity is produced downstream of the wrinkled tip. Similar wrinkles are caused by larger groups of outwardly rotating eddies. Inwardly rotating pairs cause significant convex wrinkles that grow as the flame propagates. These wrinkles encounter other eddies that alter their behavior. The effects of the hydrodynamic and diffusive instabilities are observed and found to be significant contributors to the formation and propagation of wrinkles. (orig.)

  16. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    Science.gov (United States)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is a topic not only for oil and gas exploration but also for geothermal exploration, inspection of nuclear waste sites, and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface: as a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and as a stereoscopic view of the seismic data. These methods should increase the spatial perception of the structures, and thus of the processes, in the subsurface. Stereoscopic techniques are implemented, e.g., in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data, so that a continuous view of the data is possible when changing the viewing angle and the data section; • defining areas in the stereoscopic view, to translate the spatial impression directly into an interpretation; • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom; • the possibility of collaboration, i.e. teamwork and idea exchange with simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow; rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  17. The effect of monocular depth cues on the detection of moving objects by moving observers.

    Science.gov (United States)

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-07-01

    An observer moving through the world must be able to identify and locate moving objects in the scene. In principle, one could accomplish this task by detecting object images moving at a different angle or speed than the images of other items in the optic flow field. While angle of motion provides an unambiguous cue that an object is moving relative to other items in the scene, a difference in speed could be due to a difference in the depth of the objects and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects. We found that thresholds for detection of object motion decreased as we increased the number of depth cues available to the observer.
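
    The angle cue described above can be made concrete: for a purely translating observer, background image motion radiates from the focus of expansion (FOE), so a flow vector whose direction deviates from that radial direction signals an independently moving object. A minimal sketch of this idea (the FOE position and threshold are illustrative assumptions, not values from the study):

```python
import math

def radial_deviation_deg(point, flow, foe):
    """Angle between a flow vector and the radial direction from the FOE."""
    rx, ry = point[0] - foe[0], point[1] - foe[1]
    ang = math.atan2(flow[1], flow[0]) - math.atan2(ry, rx)
    deg = abs(math.degrees(ang)) % 360
    return min(deg, 360 - deg)

def is_moving_object(point, flow, foe, threshold_deg=10.0):
    """Flag image motion that is inconsistent with pure observer translation."""
    return radial_deviation_deg(point, flow, foe) > threshold_deg

foe = (160.0, 120.0)
print(is_moving_object((200.0, 120.0), (1.0, 0.0), foe))  # radial flow: False
print(is_moving_object((200.0, 120.0), (0.0, 1.0), foe))  # off-radial flow: True
```

    A speed difference, by contrast, changes only the flow vector's length, which (as the abstract notes) is ambiguous with a difference in depth.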

  18. Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    Science.gov (United States)

    Diner, Daniel B. (Inventor)

    1991-01-01

    Methods for providing stereoscopic image presentation and stereoscopic configurations using stereoscopic viewing systems having converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters, such as an image magnification factor, q, and intercamera distance, 2w. For converged cameras, q is selected so that Ve - qwl = 0, where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and that erroneously produce or enhance perceived accelerations and decelerations, in order to provide special effects for entertainment, training, or educational purposes.
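
    The two set-up rules in the abstract reduce to simple algebra; a hedged sketch (variable names follow the abstract; the numeric values are made-up examples, and units are left to the reader):

```python
def q_parallel(e, w):
    """Parallel cameras: magnification factor q = e / w."""
    return e / w

def q_converged(V, e, w, l):
    """Converged cameras: choose q so that V*e - q*w*l = 0, i.e. q = V*e / (w*l)."""
    return V * e / (w * l)

# Half interocular distance e = 32.5 mm, half intercamera distance w = 65 mm
print(q_parallel(32.5, 65.0))  # 0.5
```

    In both cases the rule ties the image magnification to the ratio of observer geometry (e) to camera geometry (w), which is what keeps the real-to-apparent depth mapping linear.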

  19. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to improve on some undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser generating the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from the many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  20. Efficient streaming of stereoscopic depth-based 3D videos

    Science.gov (United States)

    Temel, Dogancan; Aabed, Mohammed; Solh, Mashhour; AlRegib, Ghaassan

    2013-02-01

    In this paper, we propose a method to extract depth from motion, texture and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the colored reference video, using texture, motion, luminance and chrominance content, to extract the depth map. Each channel in the YCbCr color space is processed separately. We tested this approach on different video sequences with different monocular properties. The results of our simulations show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report objective results using 3VQM and subjective analysis via comparison of rendered images. Furthermore, we analyze the savings in bit rate from eliminating the need for two video codecs, one for the reference color video and one for the depth map: in this case, only the depth cues are sent as side information along with the color video.
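
    Per-channel processing in YCbCr starts with a colour-space conversion from RGB; a minimal per-pixel sketch using the standard full-range BT.601 equations (the coefficients are from the standard, but this code is my illustration, not the authors' implementation):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 conversion of one RGB pixel (values in 0..255) to YCbCr."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(255, 255, 255)  # white
print(round(y), round(cb), round(cr))    # 255 128 128
```

    Once separated, the luminance (Y) channel carries the intensity and texture cues while Cb and Cr carry chrominance, which is what makes channel-wise cue extraction possible.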

  1. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  2. The effect of induced monocular blur on measures of stereoacuity.

    Science.gov (United States)

    Odell, Naomi V; Hatt, Sarah R; Leske, David A; Adams, Wendy E; Holmes, Jonathan M

    2009-04-01

    To determine the effect of induced monocular blur on stereoacuity measured with real depth and random dot tests. Monocular visual acuity deficits (range, 20/15 to 20/1600) were induced with 7 different Bangerter filters, and stereoacuity was measured with real depth tests (Frisby, FD2) and with the Preschool Randot (PSR) and Distance Randot (DR) random dot tests. Stereoacuity results were grouped as either "fine" (60 arcsec or better) or "coarse/nil" (200 arcsec to nil stereo). Across visual acuity deficits, stereoacuity was more severely degraded with random dot (PSR, DR) than with real depth (Frisby, FD2) tests. Degradation to worse-than-fine stereoacuity consistently occurred at 0.7 logMAR (20/100) or worse for Frisby, 0.1 logMAR (20/25) or worse for PSR, and 0.1 logMAR (20/25) or worse for FD2. There was no meaningful threshold for the DR because worse-than-fine stereoacuity was associated with -0.1 logMAR (20/15). Coarse/nil stereoacuity was consistently associated with 1.2 logMAR (20/320) or worse for Frisby, 0.8 logMAR (20/125) or worse for PSR, 1.1 logMAR (20/250) or worse for FD2, and 0.5 logMAR (20/63) or worse for DR. Stereoacuity thresholds are more easily degraded by reduced monocular visual acuity with the use of random dot tests (PSR and DR) than real depth tests (Frisby and FD2). We have defined levels of monocular visual acuity degradation associated with fine and nil stereoacuity. These findings have important implications for testing stereoacuity in clinical populations.

  3. Building a 3D scanner system based on monocular vision.

    Science.gov (United States)

    Zhang, Zhiyi; Yuan, Lin

    2012-04-10

    This paper proposes a three-dimensional scanner system, which is built by using an ingenious geometric construction method based on monocular vision. The system is simple, low cost, and easy to use, and the measurement results are very precise. To build it, one web camera, one handheld linear laser, and one background calibration board are required. The experimental results show that the system is robust and effective, and the scanning precision can be satisfied for normal users.

  4. Monocular nasal hemianopia from atypical sphenoid wing meningioma.

    Science.gov (United States)

    Stacy, Rebecca C; Jakobiec, Frederick A; Lessell, Simmons; Cestari, Dean M

    2010-06-01

    Neurogenic monocular nasal field defects respecting the vertical midline are quite uncommon. We report a case of a unilateral nasal hemianopia that was caused by compression of the left optic nerve by a sphenoid wing meningioma. Histological examination revealed that the pathology of the meningioma was consistent with that of an atypical meningioma, which carries a guarded prognosis with increased chance of recurrence. The tumor was debulked surgically, and the patient's visual field defect improved.

  5. Indoor monocular mobile robot navigation based on color landmarks

    Institute of Scientific and Technical Information of China (English)

    LUO Yuan; ZHANG Bai-sheng; ZHANG Yi; LI Ling

    2009-01-01

    A robot landmark navigation system based on a monocular camera was investigated theoretically and experimentally. First, the landmark arrangement and its data structure in the program are given; then the acquisition of landmark coordinates by the robot and the global localization of the robot are described; finally, experiments based on a Pioneer III mobile robot show that this system works well in different topographic situations without losing signposts.

  6. Altered anterior visual system development following early monocular enucleation

    Directory of Open Access Journals (Sweden)

    Krista R. Kelly

    2014-01-01

    Conclusions: The novel finding of an asymmetry in morphology of the anterior visual system following long-term survival from early monocular enucleation indicates altered postnatal visual development. Possible mechanisms behind this altered development include recruitment of deafferented cells by crossing nasal fibres and/or geniculate cell retention via feedback from primary visual cortex. These data highlight the importance of balanced binocular input during postnatal maturation for typical anterior visual system morphology.

  7. Optical scanning holography for stereoscopic display

    Science.gov (United States)

    Liu, Jung-Ping; Wen, Hsuan-Hsuan

    2016-10-01

    Optical Scanning Holography (OSH) is a scanning-type digital holographic recording technique. One of OSH's most important properties is that it can record an incoherent hologram, which is free of speckle and thus suitable for holographic display applications. The recording time of a scanning hologram is proportional to the sampling resolution, so the viewing angle as well as the resolution of a scanning hologram is limited to avoid an excessively long recording time. As a result, the viewing angle is not large enough for optical display. To solve this problem, we recorded two scanning holograms at different viewing angles and synthesized them into a single stereoscopic hologram with two main viewing angles. In display, the two views at the two main viewing angles are reconstructed. Because both views contain full-depth-resolved 3D scenes, the accommodation conflict of a conventional stereogram is avoided.

  8. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.
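
    Inverse-depth parameterizations of the kind used in DI-D schemes store a feature as an anchor camera position, a viewing direction, and an inverse depth ρ = 1/d, which keeps distant (low-parallax) features well behaved. A minimal sketch of converting such a feature to a Euclidean point (my illustration of the general parameterization, not the authors' code):

```python
import math

def inverse_depth_to_xyz(anchor, azimuth, elevation, rho):
    """Euclidean point from (anchor position, bearing angles, inverse depth rho)."""
    # Unit ray implied by the bearing-only observation.
    m = (math.cos(elevation) * math.cos(azimuth),
         math.cos(elevation) * math.sin(azimuth),
         math.sin(elevation))
    # p = anchor + (1/rho) * m
    return tuple(a + m_i / rho for a, m_i in zip(anchor, m))

# A feature seen straight ahead (azimuth = elevation = 0) with inverse depth 0.25
print(inverse_depth_to_xyz((0.0, 0.0, 0.0), 0.0, 0.0, 0.25))  # (4.0, 0.0, 0.0)
```

    The "delayed" part of DI-D refers to waiting until triangulation from camera motion supports a single depth hypothesis before committing ρ; that logic is outside this sketch.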

  10. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    Science.gov (United States)

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
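
    Monocular SFM recovers translation only up to an unknown scale; knowing the camera's true height above the ground plane fixes that scale, which is the drift correction the abstract describes. A minimal sketch of the rescaling step (names and values are illustrative, not from the paper's implementation):

```python
def corrected_translation(t, estimated_cam_height, true_cam_height):
    """Rescale an up-to-scale SFM translation using the known camera height."""
    s = true_cam_height / estimated_cam_height
    return tuple(s * c for c in t)

# SFM places the ground plane 0.5 (arbitrary) units below the camera,
# but the real camera is mounted 1.5 m above the road: scale factor 3.0.
print(corrected_translation((0.25, 0.0, 1.0), 0.5, 1.5))  # (0.75, 0.0, 3.0)
```

    The paper's contribution is largely in estimating the ground plane (and hence the estimated height) robustly per frame by fusing cues with adaptive covariances; the rescaling itself is this simple.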

  11. Factors Affecting "Ghosting" In Time-Multiplexed Piano-Stereoscopic Crt Display Systems

    Science.gov (United States)

    Lipton, Lenny

    1987-06-01

    Two factors contributing to "ghosting" (image doubling) in plano-stereoscopic CRT displays are phosphor decay and dynamic range of the shutters. A ghosting threshold must be crossed before comfortable fusion can take place. The ghosting threshold changes as image brightness increases and with higher-contrast subjects and those with larger parallax values. Because of the defects of existing liquid crystal shutters, we developed a liquid-crystal shutter with high dynamic range, good transmission, and high speed. With these shutters, residual ghosting is a result of phosphor persistence.

  12. The effect of stimulus size on stereoscopic fusion limits and response criteria.

    Science.gov (United States)

    Grove, Philip M; Finlayson, Nonie J; Ono, Hiroshi

    2014-01-01

    The stereoscopic fusion limit denotes the largest binocular disparity for which a single fused image is perceived. Several criteria can be employed when judging whether or not a stereoscopic display is fused, and this may be a factor contributing to a discrepancy in the literature. Schor, Wood, and Ogawa (1984 Vision Research, 24, 661-665) reported that fusion limits did not change as a function of bar width, while Roumes, Plantier, Menu, and Thorpe (1997 Human Factors, 39, 359-373) reported higher fusion limits for larger stimuli than for smaller stimuli. Our investigation suggests that differing criteria between the studies could contribute to this discrepancy. In experiment 1 we measured horizontal and vertical disparity fusion limits for thin bars and for the edge of an extended surface, allowing observers to use the criterion of either diplopia or rivalry when evaluating fusion for all stimuli. Fusion limits were equal for thin bars and extended surfaces in both horizontal and vertical disparity conditions. We next measured fusion limits for a range of bar widths and instructed observers to indicate which criterion they employed on each trial. Fusion limits were constant across all stimulus widths. However, there was a sharp change in criterion from diplopia to rivalry when the angular extent of the bar width exceeded about twice the fusion limit, expressed in angular terms. We conclude that stereoscopic fusion limits do not depend on stimulus size in this context, but the criterion for fusion does. Therefore, the criterion for fusion should be clearly defined in any study measuring stereoscopic fusion limits.

  13. The Exploration of Stereoscopic Printing Technology

    Institute of Scientific and Technical Information of China (English)

    秦睿睿; 许文才; 罗世永

    2012-01-01

    To explore new techniques of stereoscopic printing, this paper introduces the two necessary factors of stereoscopic imaging, the grating and the grating image, and reviews three stereoscopic printing processes: laminating a grating sheet after printing, direct grating printing, and in-line grating printing. The main ways to advance stereoscopic printing technology are to further improve the precision of printing equipment and to develop low-cost, high-performance professional software; developing a high-precision in-line printing process is the main direction for the future of the technology.

  14. Stereoscopic depth increases intersubject correlations of brain networks.

    Science.gov (United States)

    Gaebler, Michael; Biessmann, Felix; Lamke, Jan-Peter; Müller, Klaus-Robert; Walter, Henrik; Hetzer, Stefan

    2014-10-15

    Three-dimensional movies presented via stereoscopic displays have become more popular in recent years aiming at a more engaging viewing experience. However, neurocognitive processes associated with the perception of stereoscopic depth in complex and dynamic visual stimuli remain understudied. Here, we investigate the influence of stereoscopic depth on both neurophysiology and subjective experience. Using multivariate statistical learning methods, we compare the brain activity of subjects when freely watching the same movies in 2D and in 3D. Subjective reports indicate that 3D movies are more strongly experienced than 2D movies. On the neural level, we observe significantly higher intersubject correlations of cortical networks when subjects are watching 3D movies relative to the same movies in 2D. We demonstrate that increases in intersubject correlations of brain networks can serve as neurophysiological marker for stereoscopic depth and for the strength of the viewing experience.

  15. Using a high-definition stereoscopic video system to teach microscopic surgery

    Science.gov (United States)

    Ilgner, Justus; Park, Jonas Jae-Hyun; Labbé, Daniel; Westhofen, Martin

    2007-02-01

    Introduction: While there is an increasing demand for minimally invasive operative techniques in ear, nose and throat surgery, these operations are difficult to learn for junior doctors and demanding to supervise for experienced surgeons. The motivation for this study was to integrate high-definition (HD) stereoscopic video monitoring into microscopic surgery in order to facilitate teaching interaction between senior and junior surgeon. Material and methods: We attached a 1280x1024 HD stereo camera (TrueVision Systems Inc., Santa Barbara, CA, USA) to an operating microscope (Zeiss ProMagis, Zeiss Co., Oberkochen, Germany), whose images were processed online by a PC workstation with dual Intel® Xeon® CPUs (Intel Co., Santa Clara, CA). The live image was displayed through polarized filters by two LCD projectors at 1280x768 pixels on a 1.25 m rear-projection screen. While the junior surgeon performed the surgical procedure based on the displayed stereoscopic image, all other participants (senior surgeon, nurse and medical students) shared the same stereoscopic image from the screen. Results: With the basic setup performed only once on the day before surgery, fine adjustments required about 10 minutes extra during the operation schedule, which fitted into the time interval between patients and thus did not prolong operation times. As all relevant features of the operative field were demonstrated on one large screen, four major effects were obtained: A) Stereoscopy facilitated orientation for the junior surgeon as well as for medical students. B) The stereoscopic image served as an unequivocal guide for the senior surgeon to demonstrate the next surgical steps to the junior colleague. C) The theatre nurse shared the same image, anticipating the next instruments needed. D) Medical students instantly shared the information given by all staff and the image, avoiding the need for an extra teaching session. Conclusion: High definition

  16. Accurate and Robust Attitude Estimation Using MEMS Gyroscopes and a Monocular Camera

    Science.gov (United States)

    Kobori, Norimasa; Deguchi, Daisuke; Takahashi, Tomokazu; Ide, Ichiro; Murase, Hiroshi

    In order to estimate accurate rotations of mobile robots and vehicles, we propose a hybrid system which combines a low-cost monocular camera with gyro sensors. Gyro sensors have drift errors that accumulate over time. A camera, on the other hand, cannot obtain the rotation continuously when feature points cannot be extracted from images, although its accuracy is better than that of gyro sensors. To solve these problems we propose a method for combining these sensors based on an Extended Kalman Filter. The errors of the gyro sensors are corrected by referring to the rotations obtained from the camera. In addition, by judging the reliability of the camera rotations and devising the state value of the Extended Kalman Filter, the proposed method performs well even when the rotation is not continuously observable from the camera. Experimental results showed the effectiveness of the proposed method.
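
    The fusion idea can be illustrated with a one-dimensional Kalman filter: the gyro rate drives the prediction step, and an (intermittent) camera-derived angle drives the correction step, pulling accumulated drift back. A minimal sketch under simplified assumptions (scalar state, fixed noise values; my own illustration, not the authors' filter):

```python
class GyroCameraFilter:
    """Scalar Kalman filter: predict with gyro rate, correct with camera angle."""
    def __init__(self, q_gyro=1e-4, r_cam=1e-2):
        self.angle, self.p = 0.0, 1.0
        self.q, self.r = q_gyro, r_cam

    def predict(self, gyro_rate, dt):
        self.angle += gyro_rate * dt   # integrate the rate; drift accumulates here
        self.p += self.q               # uncertainty grows between corrections

    def update(self, cam_angle):
        k = self.p / (self.p + self.r) # Kalman gain
        self.angle += k * (cam_angle - self.angle)
        self.p *= (1.0 - k)

f = GyroCameraFilter()
for _ in range(100):                   # a biased gyro drifts 0.01 rad per step
    f.predict(gyro_rate=0.01, dt=1.0)
f.update(cam_angle=0.0)                # camera reports the true angle is still 0
print(abs(f.angle) < 0.2)              # True: one correction removes most drift
```

    The paper's filter additionally gates updates on a reliability judgment of the camera rotation, so unreliable camera frames simply skip the `update` step.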

  17. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue, and a particle filter based on the drogue's shape is proposed to track it. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method has good real-time performance and satisfactory robustness and positioning accuracy.

  18. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    Directory of Open Access Journals (Sweden)

    Ki-Yeong Park

    2014-01-01

    Full Text Available We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To handle the variation of camera pitch angle due to vehicle motion and road inclination, the proposed method estimates a virtual horizon from the size and position of vehicles in the captured image at run time. The method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented and the proposed method evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments.
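
    The geometric core, range from a monocular image given a horizon estimate, can be sketched with the standard flat-road pinhole relation (a generic formulation; the paper's contribution is estimating the horizon itself at run time). The focal length, camera height and row coordinates below are illustrative.

```python
def range_from_horizon(y_bottom, y_horizon, focal_px, cam_height_m):
    """Flat-road pinhole model: a vehicle whose bottom edge projects to
    image row y_bottom (rows increase downward) lies at range
    Z = f * H / (y_bottom - y_horizon), where y_horizon is the (virtual)
    horizon row, f the focal length in pixels and H the camera height."""
    dy = y_bottom - y_horizon
    if dy <= 0:
        raise ValueError("vehicle bottom must lie below the horizon")
    return focal_px * cam_height_m / dy
```

    With f = 1000 px and a camera 1.2 m above the road, a vehicle whose bottom edge sits 60 rows below the estimated horizon is at 1000 * 1.2 / 60 = 20 m; an error of a few rows in the horizon shifts the range noticeably, which is why the run-time virtual-horizon update matters.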

  19. Indoor Mobile Robot Navigation by Central Following Based on Monocular Vision

    Science.gov (United States)

    Saitoh, Takeshi; Tada, Naoya; Konishi, Ryosuke

    This paper develops indoor mobile robot navigation by center following based on monocular vision. In our method, two boundary lines between the wall and the baseboard are detected in the frontal image, and appearance-based obstacle detection is then applied. When an obstacle exists, an avoidance or stopping movement is executed according to the size and position of the obstacle; when no obstacle exists, the robot moves along the center of the corridor. We developed a wheelchair-based mobile robot and evaluated the accuracy of the boundary line detection, obtaining fast processing and high detection accuracy. We demonstrate the effectiveness of our mobile robot through stopping experiments with various obstacles and through moving experiments.

  20. Real Time 3D Facial Movement Tracking Using a Monocular Camera.

    Science.gov (United States)

    Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng

    2016-07-25

    The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  1. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    Science.gov (United States)

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  2. Interactive floating windows: a new technique for stereoscopic video games

    Science.gov (United States)

    Zerebecki, Chris; Stanfield, Brodie; Tawadrous, Mina; Buckstein, Daniel; Hogue, Andrew; Kapralos, Bill

    2012-03-01

    The film industry has a long history of creating compelling experiences in stereoscopic 3D. Recently, the video game as an artistic medium has matured into an effective way to tell engaging and immersive stories. Given the current push to bring stereoscopic 3D technology into the consumer market, there is considerable interest in developing stereoscopic 3D video games. Game developers have largely ignored the need to design their games specifically for stereoscopic 3D and have instead relied on automatic conversion and driver technology. Game developers need to evaluate solutions used in other media, such as film, to correct perceptual problems such as window violations, and to modify or create new solutions that work within an interactive framework. In this paper we extend the dynamic floating window technique into the interactive domain, enabling the player to position a virtual window in space. By interactively changing the position, size, and 3D rotation of the virtual window, objects can be made to 'break the mask', dramatically enhancing the stereoscopic effect. By demonstrating that solutions from the film industry can be extended into the interactive space, we hope to initiate further discussion in the game development community on strengthening story-telling mechanisms in stereoscopic 3D games.

  3. Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision

    Science.gov (United States)

    Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.

    2003-08-01

    Operation of the device is based on alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the corresponding picture. The switching frequency exceeds 100 Hz, so no flicker is perceived. Thus images are presented separately to the left and right eyes in turn without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordinating the LC-cell transfer characteristic with the timing parameters of the monitor screen improved stereo image quality. A difficult problem for computer stereo images with LC glasses is so-called 'ghosting': noise images that reach the blocked eye. We reduced its influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia and other binocular and stereoscopic vision impairments; for cultivating, training and developing stereoscopic vision; for measuring horizontal and vertical phoria, fusion reserves and stereovision acuity; and for mapping the borders of central scotomas, as well as suppression scotomas in strabismus.

  4. Stochastically optimized monocular vision-based navigation and guidance

    Science.gov (United States)

    Watanabe, Yoko

    The objective of this thesis is to design a relative navigation and guidance law for unmanned aerial vehicles, or UAVs, for vision-based control applications. The autonomous operation of UAVs has progressively developed in recent years. In particular, vision-based navigation, guidance and control has been one of the most active research topics for the automation of UAVs. This is because in nature, birds and insects use vision as the exclusive sensor for object detection and navigation. Furthermore, a vision sensor is efficient since it is compact, lightweight and low cost. Therefore, this thesis studies the monocular vision-based navigation and guidance of UAVs. Since 2-D vision-based measurements are nonlinear with respect to the 3-D relative states, an extended Kalman filter (EKF) is applied in the navigation system design. The EKF-based navigation system is integrated with a real-time image processing algorithm and is tested in simulations and flight tests. The first closed-loop vision-based formation flight between two UAVs has been achieved, and the results are shown in this thesis to verify the estimation performance of the EKF. In addition, vision-based 3-D terrain recovery was performed in simulations to present a navigation design which is capable of estimating the states of multiple objects. In this problem, the statistical z-test is applied to solve the correspondence problem of relating measurements and estimation states. As a practical example of vision-based control applications for UAVs, a vision-based obstacle avoidance problem is specially addressed in this thesis. A navigation and guidance system is designed for a UAV to achieve a mission of waypoint tracking while avoiding unforeseen stationary obstacles by using vision information. An EKF is applied to estimate each obstacle's position from the vision-based information. A collision criterion is established by using a collision-cone approach and a time-to-go criterion.
A minimum

  5. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles.

    Science.gov (United States)

    Huang, Kuo-Lung; Chiu, Chung-Cheng; Chiu, Sheng-Yi; Teng, Yao-Jen; Hao, Shu-Sheng

    2015-07-13

    The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft's nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain ground images at the same resolution. A forward-looking camera is mounted on the top of the aircraft's nose; in combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution and to detect the relative altitude along the flight path.

  7. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, a system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space; using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
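
    Once an object is localized, distance and time to collision follow from the relative position and velocity. A minimal sketch (a generic closing-speed computation, not the paper's exact estimator; all values are illustrative):

```python
import math

def time_to_collision(rel_pos, rel_vel):
    """Return (range, time to collision) for a target with relative
    position rel_pos and relative velocity rel_vel in the same 3-D frame.
    TTC = range / closing speed; a non-closing target has infinite TTC."""
    rng = math.sqrt(sum(c * c for c in rel_pos))
    # Closing speed is the negative radial component of relative velocity.
    closing = -sum(p * v for p, v in zip(rel_pos, rel_vel)) / rng
    if closing <= 0:
        return rng, float("inf")
    return rng, rng / closing
```

    For example, a target at (3000, 4000, 0) m moving at (-30, -40, 0) m/s relative to the carrier is at 5000 m range, closing at 50 m/s, giving 100 s to collision.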

  8. Designing a high accuracy 3D auto stereoscopic eye tracking display, using a common LCD monitor

    Science.gov (United States)

    Taherkhani, Reza; Kia, Mohammad

    2012-09-01

    This paper describes the design and construction of a low-cost, practical stereoscopic display that does not require special glasses and uses eye tracking to give a large degree of freedom to viewer (or viewers') movement while displaying the minimum amount of information. The parallax barrier technique is employed to turn an LCD into an auto-stereoscopic display. The stereo image pair is displayed simultaneously on an ordinary liquid crystal display, but in different columns of pixels. Controlling the display at the red-green-blue subpixel level increases the accuracy of the light projection direction to less than 2 degrees without sacrificing too much of the LCD's resolution; an eye-tracking system determines the correct angle to project the images along the viewer's eye pupils, and an image processing system puts the 3D image data into the correct R-G-B subpixels. In practice, 1.6 degrees of light-direction control was achieved. The 3D monitor is made simply by applying some simple optical materials to a usual LCD display with normal resolution.
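
    The gain from subpixel addressing follows from simple parallax-barrier geometry: the angular step between adjacent view directions is set by the addressable pitch seen through the barrier gap. The pitch and gap values below are illustrative, not the actual panel's.

```python
import math

def steering_steps_deg(pixel_pitch_mm, barrier_gap_mm):
    """Angular step between adjacent view directions for whole-pixel and
    R-G-B subpixel addressing: atan(addressable pitch / barrier gap).
    Subpixel addressing divides the effective pitch, and hence the
    angular step, by 3."""
    full = math.degrees(math.atan2(pixel_pitch_mm, barrier_gap_mm))
    sub = math.degrees(math.atan2(pixel_pitch_mm / 3.0, barrier_gap_mm))
    return full, sub
```

    For a 0.3 mm pixel pitch behind a 5 mm barrier gap, whole-pixel addressing gives roughly 3.4 degree steps while subpixel addressing gives roughly 1.15 degrees, i.e. below the 2-degree figure the paper reports.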

  9. What is stereoscopic vision good for?

    Science.gov (United States)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  10. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness by challenging chickens to modify their monocular sleep. We sleep-deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep decreased.

  12. Display depth analyses with the wave aberration for the auto-stereoscopic 3D display

    Science.gov (United States)

    Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Chen, Duo; Chen, Zhidong; Zhang, Wanlu; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-07-01

    Because aberration severely affects the display performance of the auto-stereoscopic 3D display, diffraction theory is used to analyze the diffraction field distribution and the display depth through aberration analysis. Based on the proposed method, the display depth of the central and marginal reconstructed images is discussed. The experimental results agree with the theoretical analyses. Increasing the viewing distance or decreasing the lens aperture can improve the display depth. Different viewing distances and an LCD with two lens arrays are used to verify this conclusion.

  13. Measurement of mean rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, Oktay; Meyer, Knud Erik; Larsen, Poul Scheel

    2005-01-01

    A technique is described for measuring the mean velocity gradient (rate-of-displacement) tensor by using a conventional stereoscopic particle image velocimetry (SPIV) system. Planar measurement of the mean vorticity vector, the rate-of-rotation and rate-of-strain tensors, and the production of turbulent kinetic energy can be accomplished. Parameters of the Q criterion and negative λ2 techniques used for vortex identification can be evaluated in the mean flow field. Experimental data were obtained for a circular turbulent jet issuing normal to a crossflow in a low-speed wind tunnel for a jet

  14. Using Stereoscopic 3D Technologies for the Diagnosis and Treatment of Amblyopia in Children

    CERN Document Server

    Gargantini, Angelo

    2011-01-01

    The 3D4Amb project aims at developing a system based on stereoscopic 3D technology, such as NVIDIA 3D Vision, for the diagnosis and treatment of amblyopia in young children. It exploits active shutter technology to provide binocular vision, i.e. to show different images to the amblyopic (or lazy) eye and the normal eye. This allows easy diagnosis of amblyopia and treatment by means of interactive games or other entertainment activities. The approach should not suffer from the compliance problems of the classical treatment, is suitable for domestic use, and could at least partially substitute for occlusion or patching of the normal eye.

  16. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    Science.gov (United States)

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  17. The Effects of Stereoscopic Projection on the Understanding by Nigerian Schoolchildren of Spatial Relationships in Diagrams.

    Science.gov (United States)

    Nicholson, J. R.; And Others

    1978-01-01

    Discussion of stereoscopic and planoscopic diagrams as they relate to the level of understanding of depth relationships. Findings indicate significantly lower understanding with stereoscopic diagrams than with a real three-dimensional situation. (JEG)

  18. Depth-of-Focus Affects 3D Perception in Stereoscopic Displays.

    Science.gov (United States)

    Vienne, Cyril; Blondé, Laurent; Mamassian, Pascal

    2015-01-01

    Stereoscopic systems present binocular images on a planar surface at a fixed distance. They induce cues to flatness, indicating that images are presented on a single surface and specifying the relative depth of that surface. This study focuses on a second problem, which arises when the distance of a 3D object differs from the display distance: because binocular disparity must be scaled by an estimate of viewing distance, object depth can be affected through disparity scaling. Two previous experiments revealed that stereoscopic displays can affect depth perception due to conflicting accommodation and vergence cues at near distances. In this study, depth perception is evaluated for farther accommodation and vergence distances using a commercially available 3D TV. In Experiment 1, we evaluated depth perception of 3D stimuli at different vergence distances for a large pool of participants. We observed a strong effect of vergence distance that was bigger for younger than for older participants, suggesting that the effect of accommodation was reduced in participants with emerging presbyopia. In Experiment 2, we extended the 3D estimations by varying both the accommodation and vergence distances. We also tested the hypothesis that setting accommodation open loop by constricting pupil size could decrease the contribution of focus cues to perceived distance. We found that depth constancy was affected by accommodation and vergence distances and that the accommodation-distance effect was reduced with a larger depth-of-focus. We discuss these results with regard to the effectiveness of focus cues as a distance signal. Overall, these results highlight the importance of appropriate focus cues in stereoscopic displays at intermediate viewing distances.
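
    The disparity-scaling problem the study probes can be made concrete with the standard small-angle geometry (a textbook relation, not the authors' model): the same retinal disparity maps to very different depths depending on the assumed viewing distance, so a misestimated distance distorts perceived depth.

```python
def depth_from_disparity(disp_rad, view_dist_m, iod_m=0.063):
    """Depth beyond the screen implied by a relative (uncrossed) disparity
    disp_rad (radians) at viewing distance D, for interocular distance I:
    depth = disp * D**2 / (I - disp * D).
    For small disparities this is roughly disp * D**2 / I, so perceived
    depth scales with the square of the assumed viewing distance."""
    denom = iod_m - disp_rad * view_dist_m
    if denom <= 0:
        raise ValueError("disparity exceeds the geometric limit")
    return disp_rad * view_dist_m ** 2 / denom
```

    A 0.001 rad (about 3.4 arcmin) disparity implies roughly 1.6 cm of depth at 1 m but roughly 6.6 cm at 2 m; if accommodation and vergence suggest conflicting distances, the depth percept is pulled accordingly.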

  19. Polygonal pyramid device for three-dimensional stereoscopic projection

    Institute of Scientific and Technical Information of China (English)

    房若宇

    2015-01-01

    Based on the rectangular pyramid model of stereoscopic projection, a fabrication method for a hexagonal-pyramid stereoscopic projection device was explored, and vivid three-dimensional stereoscopic projection was achieved. The projection source was also improved so that the projected images can be moved under remote control; the stereoscopic projection is thus no longer a simple playback of the source video.

  20. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and economy. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard provides the initial depth map and scale-correction information during the SLAM process: it supplies the absolute scale of the scene and serves as a bridge between the camera's visual coordinate frame and the world coordinate frame. The scene is reconstructed as a series of keyframes with their poses and corresponding semi-dense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with depth-map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among keyframes, and the calibration chessboard is used to correct the accumulated pose error. Several indoor experiments were conducted; the results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach and can run in real time on a commonly used computer.
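
    The role of the chessboard as a bridge to the world coordinate frame can be sketched as a scale correction: monocular SLAM recovers geometry only up to scale, so comparing the reconstructed chessboard with its known metric size yields the factor that makes the map metric. Function names and values below are illustrative, not the paper's algorithm.

```python
def metric_scale(est_square_sizes, true_square_m):
    """Ratio of the known chessboard square size to its mean reconstructed
    size (measured in arbitrary SLAM units)."""
    mean_est = sum(est_square_sizes) / len(est_square_sizes)
    return true_square_m / mean_est

def rescale_positions(positions, scale):
    """Apply the metric scale factor to reconstructed camera positions
    (and, identically, to depth maps) to obtain metric coordinates."""
    return [tuple(scale * c for c in p) for p in positions]
```

    Re-estimating the factor whenever the chessboard is re-observed also gives a handle on the scale drift the paper models between keyframes.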

  1. Automatic correction of hand pointing in stereoscopic depth.

    Science.gov (United States)

    Song, Yalin; Sun, Yaoru; Zeng, Jinhua; Wang, Fang

    2014-12-11

    In order to examine whether stereoscopic depth information could drive fast automatic correction of hand pointing, an experiment was designed in a 3D visual environment in which participants were asked to point to a target at different stereoscopic depths as quickly and accurately as possible within a limited time window (≤300 ms). The experiment consisted of two tasks: "depthGO", in which participants were asked to point to the new target position if the target jumped, and "depthSTOP", in which participants were instructed to abort their ongoing movements after the target jumped. The depth jump occurred in 20% of the trials in both tasks. Results showed that stereoscopic depth can drive fast automatic correction of hand movements, occurring as early as 190 ms.

  2. Effects of Stereoscopic 3D Digital Radar Displays on Air Traffic Controller Performance

    Science.gov (United States)

    2013-03-01

    Master's thesis (AFIT-ENV-13-M-24) by Jason G. Russi, Technical Sergeant, USAF, on the effects of stereoscopic 3D digital radar displays on air traffic controller performance. The work is not subject to copyright protection in the United States.

  3. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays fall into two types by presentation method: systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images: a conventional display can show only one screen and cannot enlarge it, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. This extension method lets observers view a virtual image plane and doubles the screen area. In the developed display unit, we used an image-separating technique based on polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area; meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, a useful function for perceiving depth.

  4. RBF-Based Monocular Vision Navigation for Small Vehicles in Narrow Space below Maize Canopy

    Directory of Open Access Journals (Sweden)

    Lu Liu

    2016-06-01

    Full Text Available Maize is one of the major food crops in China. Traditionally, field operations are done by manual labor, where the farmers are threatened by the harsh environment and pesticides. On the other hand, it is difficult for large machinery to maneuver in the field due to limited space, particularly in the middle and late growth stages of maize. Unmanned, compact agricultural machines, therefore, are ideal for such field work. This paper describes a method of monocular visual recognition to navigate small vehicles between narrow crop rows. Edge detection and noise elimination were used for image segmentation to extract the stalks in the image. The stalk coordinates define passable boundaries, and a simplified radial basis function (RBF)-based algorithm was adapted for path planning to improve the fault tolerance of stalk coordinate extraction. The average image processing time, including network latency, is 220 ms. The average time consumption for path planning is 30 ms. The fast processing ensures a top speed of 2 m/s for our prototype vehicle. When operating at the normal speed (0.7 m/s), the rate of collision with stalks is under 6.4%. Additional simulations and field tests further proved the feasibility and fault tolerance of our method.

  5. Maximum Likelihood Estimation of Monocular Optical Flow Field for Mobile Robot Ego-motion

    Directory of Open Access Journals (Sweden)

    Huajun Liu

    2016-01-01

    Full Text Available This paper presents an optimized scheme of monocular ego-motion estimation to provide location and pose information for mobile robots with one fixed camera. First, a multi-scale hyper-complex wavelet phase-derived optical flow is applied to estimate the micro-motion of image blocks. Optical flow computation overcomes the difficulties of unreliable feature selection and feature matching in outdoor scenes; at the same time, the multi-scale strategy overcomes the problems of road surface self-similarity and local occlusions. Second, a support probability for each flow vector is defined to evaluate the validity of the candidate image motions, and a Maximum Likelihood Estimation (MLE) optical flow model is constructed based not only on image motion residuals but also on the distribution of inliers and outliers, together with their support probabilities, to evaluate a given transform. This yields an optimized estimation of the inlier part of the optical flow. Third, a sampling and consensus strategy is designed to estimate the ego-motion parameters. Our model and algorithms are tested on real datasets collected from an intelligent vehicle. The experimental results demonstrate that the estimated ego-motion parameters closely follow the GPS/INS ground truth in complex outdoor road scenarios.
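The sampling-and-consensus stage can be sketched for the simplest case of a dominant global image translation; the inverse-residual support weighting below is an illustrative stand-in for the paper's support-probability model, not its actual formulation:

```python
import numpy as np

def consensus_translation(flows, n_iter=200, tol=0.5, seed=0):
    """Estimate a dominant image translation from noisy block flows.

    A sampling-and-consensus loop picks candidate flow vectors,
    scores support by residual, and refines the winner with a
    support-weighted mean over its inliers.
    """
    rng = np.random.default_rng(seed)
    flows = np.asarray(flows, float)
    best_inliers = np.zeros(len(flows), bool)
    for _ in range(n_iter):
        cand = flows[rng.integers(len(flows))]   # random candidate flow
        resid = np.linalg.norm(flows - cand, axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    inl = flows[best_inliers]
    # Support weight ~ inverse residual to the inlier mean
    resid = np.linalg.norm(inl - inl.mean(axis=0), axis=1)
    w = 1.0 / (1.0 + resid)
    return (w[:, None] * inl).sum(axis=0) / w.sum(), best_inliers

# 80% of blocks move by (2, 0); the rest are outliers on moving objects
flows = [[2.0, 0.0]] * 8 + [[8.0, 5.0], [-6.0, 3.0]]
t, inliers = consensus_translation(flows)
```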

  6. Evidence of Stereoscopic Surface Disambiguation in the Responses of V1 Neurons.

    Science.gov (United States)

    Samonds, Jason M; Tyler, Christopher W; Lee, Tai Sing

    2016-03-10

    For the important task of binocular depth perception from complex natural-image stimuli, the neurophysiological basis for disambiguating multiple matches between the eyes across similar features has remained a long-standing problem. Recurrent interactions among binocular disparity-tuned neurons in the primary visual cortex (V1) could play a role in stereoscopic computations by altering responses to favor the most likely depth interpretation for a given image pair. Psychophysical research has shown that binocular disparity stimuli displayed in 1 region of the visual field can be extrapolated into neighboring regions that contain ambiguous depth information. We tested whether neurons in macaque V1 interact in a similar manner and found that unambiguous binocular disparity stimuli displayed in the surrounding visual fields of disparity-selective V1 neurons indeed modified their responses when either bistable stereoscopic or uniform featureless stimuli were presented within their receptive field centers. The delayed timing of the response behavior compared with the timing of classical surround suppression and multiple control experiments suggests that these modulations are carried out by slower disparity-specific recurrent connections among V1 neurons. These results provide explicit evidence that the spatial interactions that are predicted by cooperative algorithms play an important role in solving the stereo correspondence problem.

  7. Stereoscopic multi-planar PIV measurements of in-cylinder tumbling flow

    Energy Technology Data Exchange (ETDEWEB)

    Buecker, I.; Karhoff, D.C.; Klaas, M.; Schroeder, W. [RWTH Aachen University, Institute of Aerodynamics, Aachen (Germany)

    2012-12-15

    The non-reacting flow field within the combustion chamber of a motored direct-injection spark-ignition engine with a tumble intake port is measured. The three-dimensionality of the flow necessitates the measurement of all three velocity components via stereoscopic particle-image velocimetry in multiple planes. Phase-locked stereoscopic PIV is applied at 15 crank angles during the intake and compression strokes, showing the temporal evolution of the flow field. The flow fields are obtained within a set of 14 axial planes, covering nearly the complete cylinder volume. The stereoscopic PIV setup applied to engine in-cylinder flow, and the arising problems and solutions, are discussed in detail. The three-dimensional flow field is reconstructed and analyzed using vortex criteria. The tumble vortex is the dominant flow structure, and this vortex varies significantly in shape, strength, and position throughout the two strokes. The tumble vortex center moves clockwise through the combustion chamber. At first the tumble has a C-shape, which turns into an almost straight tube at the end of compression. Small-scale structures are analyzed by the distribution of the turbulent kinetic energy. It is evident that the symmetry plane represents the 3D flow field only after 100 CAD. For earlier crank angles, both the kinetic energy (KE) and the turbulent kinetic energy (TKE) in the combustion chamber are well below the KE and TKE in the symmetry plane. This should be taken into account when the injection and breakup of the three-dimensional fuel jet are studied. The mean kinetic energy is conserved until late compression by the tumble motion. Through the excited air motion, this conservation enhances the initial air-fuel mixing, which is of interest for direct-injection gasoline engines. (orig.)
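The KE/TKE comparison above rests on standard ensemble statistics over the phase-locked PIV samples; a minimal sketch (shapes and names are assumptions, not the authors' processing code):

```python
import numpy as np

def mean_and_tke(samples):
    """Ensemble statistics at one measurement point and crank angle.

    `samples` has shape (n_cycles, 3): the (u, v, w) velocity measured
    in each engine cycle.  Returns the kinetic energy of the ensemble-
    mean velocity (KE) and the turbulent kinetic energy (TKE) of the
    cycle-to-cycle fluctuations, both per unit mass.
    """
    samples = np.asarray(samples, float)
    mean = samples.mean(axis=0)
    fluct = samples - mean                     # fluctuating part u'
    ke = 0.5 * float(np.sum(mean ** 2))        # 0.5 * |U_mean|^2
    tke = 0.5 * float(np.sum((fluct ** 2).mean(axis=0)))
    return ke, tke

# Two cycles at one point: mean u = 2 m/s with +/-1 m/s fluctuation
ke, tke = mean_and_tke([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
```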

  8. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments were performed to evaluate the performance of our prototype system. The experimental results show that the proposed system is compact, low-cost, and easy to install, and that its frame rates are high enough for high-speed motion tracking in games.
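The abstract does not spell out the range model. One plausible sketch, which is an assumption rather than the paper's method, takes the received IR intensity to fall off with the inverse square of distance, so a single calibration constant converts intensity to range, and the PSD spot position then back-projects through a pinhole model:

```python
import math

def marker_position(x_psd, y_psd, intensity, focal_mm, k_cal):
    """Recover a 3-D marker position from one PSD measurement.

    Assumed model: received intensity I ~ k_cal / z^2, so the range
    is z = sqrt(k_cal / I), and the wide-angle optics are treated as
    an ideal pinhole with focal length `focal_mm`.
    """
    z = math.sqrt(k_cal / intensity)   # range from intensity falloff
    x = x_psd * z / focal_mm           # back-project the PSD spot
    y = y_psd * z / focal_mm
    return x, y, z

# Marker calibrated to I = 1.0 at z = 1000 mm  =>  k_cal = 1e6
x, y, z = marker_position(1.2, -0.8, 0.25, focal_mm=4.0, k_cal=1.0e6)
```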

  9. Monocular vision based navigation method of mobile robot

    Institute of Scientific and Technical Information of China (English)

    DONG Ji-wen; YANG Sen; LU Shou-yin

    2009-01-01

    A trajectory tracking method is presented for the visual navigation of a monocular mobile robot. The robot moves along a line trajectory drawn beforehand and stops on a recognized stop-sign to perform a special task. The robot uses a forward-looking color digital camera to capture information in front of it, and segments the trajectory and the stop-sign using the HSI color model. A "sampling estimate" method is then used to calculate the navigation parameters. The stop-sign is easily recognized, and 256 different signs can be distinguished. Tests indicate that the method tolerates a wide range of brightness and has good robustness and real-time performance.
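The HSI segmentation step can be illustrated with the standard RGB-to-HSI conversion; the threshold values for the guide line below are illustrative assumptions, not the paper's:

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion (inputs in [0, 1])."""
    i = (r + g + b) / 3.0
    mn = min(r, g, b)
    s = 0.0 if i == 0 else 1.0 - mn / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                   # achromatic pixel
    else:
        h = math.acos(max(-1.0, min(1.0, num / den)))
        if b > g:
            h = 2.0 * math.pi - h
    return h, s, i

def is_trajectory_pixel(r, g, b):
    """Illustrative threshold for a saturated red guide line."""
    h, s, i = rgb_to_hsi(r, g, b)
    red_hue = h < 0.5 or h > 2.0 * math.pi - 0.5
    return red_hue and s > 0.4

on_line = is_trajectory_pixel(0.9, 0.1, 0.1)     # saturated red
off_line = is_trajectory_pixel(0.5, 0.5, 0.5)    # gray floor
```

Thresholding on hue and saturation rather than on raw RGB is what gives the method its tolerance to brightness changes: intensity varies with illumination while hue stays comparatively stable.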

  10. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex. Stimulus–response curves were constructed by recording the intensity of the reported phosphenes evoked in the contralateral visual field at a range of TMS intensities. Phosphene measurements revealed that MD produced a rapid and robust decrease in cortical excitability relative to a control condition without…

  11. The problem of designing a solar stereoscopic observatory

    Science.gov (United States)

    Chebotarev, V.; Grigoryev, V.; Konovalov, V.; Kosenko, V.; Papushev, P.; Uspensky, G.

    This paper presents results derived by exploring the possibilities of creating an interplanetary stereoscopic observatory to investigate the 3D structure of solar features, from granules and spicules to coronal structures. A preliminary study was made of the passive motion of two spacecraft in the vicinity of the Lagrangian libration points L4 and L5. A version of the ballistic scheme for setting up the system with a minimal deployment time is considered. For preliminary development of the stereoscopic spacecraft, the main parameters of the scientific payload were taken to be: mass, 600 kg; power, 1 kW; total data, 5 Gbit per day. The chief results of this work are: (i) the stereoscopic observatories can be realized with a complete set of achievable objectives; (ii) launching spacecraft with a mass of 2000 kg into both libration points is possible using the Soviet launch vehicle "Proton" within 1.17 years; (iii) transmission of the 5 Gbit per day from the stereoscopic observatory is possible to a ground-based antenna 70 m in diameter, using a transmitting-receiving phased-array antenna of size 5 m aboard the spacecraft.

  12. Increasing Range Of Apparent Depth In A Stereoscopic Display

    Science.gov (United States)

    Busquets, Anthony M.; Parrish, Russell V.; Williams, Steven P.

    1995-01-01

    Optical configuration conceived for increasing range of apparent depth provided by stereoscopic display system, without imposing concomitant reduction in field of view. Observer wears shuttered goggles synchronized with alternating left- and right-eye views on display. However, instead of looking directly at display screen, observer looks at screen via reflection in mirror collimating light emitted by screen.

  13. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Science.gov (United States)

    Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il “Dan”

    2016-01-01

    This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. PMID:26938540
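Inverse perspective mapping projects each pixel's viewing ray onto the floor plane, which is why it works even with a low-mounted camera. A minimal pinhole-camera sketch (the calibration values and the function name `ipm_ground_point` are assumptions, not the paper's implementation):

```python
import math

def ipm_ground_point(u, v, f, cx, cy, cam_h, tilt):
    """Map a pixel onto the floor plane (inverse perspective mapping).

    Pinhole camera at height cam_h (m), pitched down by `tilt` rad.
    Returns (x_lateral, z_forward) in metres, or None for pixels
    whose ray never meets the floor (at or above the horizon).
    """
    # Viewing ray in camera coordinates (image y grows downward)
    ray = [(u - cx) / f, (v - cy) / f, 1.0]
    # Rotate by the pitch so the floor is the plane y = cam_h
    c, s = math.cos(tilt), math.sin(tilt)
    ry = c * ray[1] + s * ray[2]
    rz = -s * ray[1] + c * ray[2]
    if ry <= 1e-9:
        return None
    t = cam_h / ry                     # scale to intersect the floor
    return (ray[0] * t, rz * t)

# Camera 0.3 m above the floor, pitched down 20 degrees
pt = ipm_ground_point(320, 300, f=500.0, cx=320.0, cy=240.0,
                      cam_h=0.3, tilt=math.radians(20))
```

Under this mapping, floor pixels land at consistent ground coordinates while pixels on an upright obstacle are projected too far away, which is the geometric cue the segmentation can exploit.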

  14. A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Tae-Jae Lee

    2016-03-01

    Full Text Available This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.

  15. Effect of field of view and monocular viewing on angular size judgements in an outdoor scene

    Science.gov (United States)

    Denz, E. A.; Palmer, E. A.; Ellis, S. R.

    1980-01-01

    Observers typically overestimate the angular size of distant objects. Significantly, overestimations are greater in outdoor settings than in aircraft visual-scene simulators. The effect of field of view and monocular and binocular viewing conditions on angular size estimation in an outdoor field was examined. Subjects adjusted the size of a variable triangle to match the angular size of a standard triangle set at three greater distances. Goggles were used to vary the field of view from 11.5 deg to 90 deg for both monocular and binocular viewing. In addition, an unrestricted monocular and binocular viewing condition was used. It is concluded that neither restricted fields of view similar to those present in visual simulators nor the restriction of monocular viewing causes a significant loss in depth perception in outdoor settings. Thus, neither factor should significantly affect the depth realism of visual simulators.
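The matching task reduces to simple visual-angle geometry: the comparison object is adjusted until it subtends the same angle as the standard. A minimal sketch of that geometry (an idealization, not the study's procedure):

```python
import math

def angular_size_deg(width, distance):
    """Visual angle subtended by an object of `width` at `distance`."""
    return math.degrees(2.0 * math.atan(width / (2.0 * distance)))

def matching_width(angle_deg, distance):
    """Physical width a comparison object needs at `distance` to
    subtend `angle_deg` (a veridical, bias-free match)."""
    return 2.0 * distance * math.tan(math.radians(angle_deg) / 2.0)

# A 1 m standard at 50 m subtends ~1.15 degrees; a veridical match
# at 10 m would therefore be 0.2 m wide.  Observers who overestimate
# the angular size of the distant standard set the near comparison
# larger than this.
angle = angular_size_deg(1.0, 50.0)
match = matching_width(angle, 10.0)
```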

  16. Reactivation of thalamocortical plasticity by dark exposure during recovery from chronic monocular deprivation

    Science.gov (United States)

    Montey, Karen L.; Quinlan, Elizabeth M.

    2015-01-01

    Chronic monocular deprivation induces severe amblyopia that is resistant to spontaneous reversal in adulthood. However, dark exposure initiated in adulthood reactivates synaptic plasticity in the visual cortex and promotes recovery from chronic monocular deprivation. Here we show that chronic monocular deprivation significantly decreases the strength of feedforward excitation and significantly decreases the density of dendritic spines throughout the deprived binocular visual cortex. Dark exposure followed by reverse deprivation significantly enhances the strength of thalamocortical synaptic transmission and the density of dendritic spines on principal neurons throughout the depth of the visual cortex. Thus dark exposure reactivates widespread synaptic plasticity in the adult visual cortex, including at thalamocortical synapses, during the recovery from chronic monocular deprivation. PMID:21587234

  17. Apparent motion of monocular stimuli in different depth planes with lateral head movements.

    Science.gov (United States)

    Shimono, K; Tam, W J; Ono, H

    2007-04-01

    A stationary monocular stimulus appears to move concomitantly with lateral head movements when it is embedded in a stereogram representing two front-facing rectangular areas, one above the other at two different distances. In Experiment 1, we found that the extent of perceived motion of the monocular stimulus covaried with the amplitude of head movement and the disparity between the two rectangular areas (composed of random dots). In Experiment 2, we found that the extent of perceived motion of the monocular stimulus was reduced compared to that in Experiment 1 when the rectangular areas were defined only by an outline rather than by random dots. These results are discussed using the hypothesis that a monocular stimulus takes on features of the binocular surface area in which it is embedded and is perceived as though it were treated as a binocular stimulus with regards to its visual direction and visual depth.

  18. The effect of monocular depth cues on the detection of moving objects by moving observers

    National Research Council Canada - National Science Library

    Royden, Constance S; Parsons, Daniel; Travatello, Joshua

    2016-01-01

    ... and thus is an ambiguous cue. We tested whether the addition of information about the distance of objects from the observer, in the form of monocular depth cues, aided detection of moving objects...

  19. Stereoscopic Analysis of 19 May and 31 Aug 2007 Filament Eruptions

    Science.gov (United States)

    Liewer, Paulett; DeJong, E. M.; Hall, J. R.

    2008-01-01

    The presentation outline includes results from stereoscopic analysis of SECCHI/EUVI data for the 19 May 2007 filament eruption, including the determined 3D trajectory of the erupting filament, strong evidence for reconnection below the erupting filament consistent with the standard model, and comparison of EUVI and H-alpha images during the eruption; and results from stereoscopic analysis of the 31 August 2007 filament eruption. Slide topics include: standard model of filament eruption; 2007 May 19 STEREO A/SECCHI/EUVI 195 and 304 A: CME signatures and filament eruption, 3D reconstruction of erupting prominence; filament's relation to coronal magnetic fields; 3D reconstructions of filament eruption; height-time plot of eruption from 3D reconstructions; detailed pre-eruption comparison of H-alpha and EUVI 304 at 12:42 UT; comparisons during the eruption; STEREO prominence and CME August 31, 2007; reconstructions of prominence and leading edges of both dark cavity and CME; and 3D reconstructions of prominence and leading edges.
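Stereoscopic reconstruction from the two STEREO viewpoints amounts to triangulating tie-points along the two lines of sight. A minimal closest-approach sketch of that idea (a generic method, not necessarily the SECCHI pipeline):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest-approach triangulation of two sight lines.

    p_i: observer positions; d_i: unit viewing directions.
    Returns the midpoint of the shortest segment joining the rays,
    i.e. the least-squares 3-D position of the observed feature.
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w = p2 - p1
    # Normal equations for min |(p1 + t1*d1) - (p2 + t2*d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, [d1 @ w, d2 @ w])
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two viewpoints 2 units apart, both sighting a feature at (1, 1, 0)
s = 1.0 / np.sqrt(2.0)
pt = triangulate([0, 0, 0], [s, s, 0], [2, 0, 0], [-s, s, 0])
```

The solve fails for parallel (zero-parallax) rays, which mirrors the physical requirement of sufficient spacecraft separation for a 3D reconstruction.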

  20. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    Science.gov (United States)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, offering a wide measurement range, high accuracy, fast speed, and low cost. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectron elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment that was performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  1. Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules

    Science.gov (United States)

    Qiu, Fangtu T.; von der Heydt, Rüdiger

    2006-01-01

    Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (gestalt factors). These are combined in single neurons so that the ‘near’ side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays gestalt factors influence the responses and can enhance or null the stereoscopic depth information. PMID:15996555

  2. The role of monocularly visible regions in depth and surface perception.

    Science.gov (United States)

    Harris, Julie M; Wilcox, Laurie M

    2009-11-01

    The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.

  3. Comparison of Subjective Refraction under Binocular and Monocular Conditions in Myopic Subjects.

    Science.gov (United States)

    Kobashi, Hidenaga; Kamiya, Kazutaka; Handa, Tomoya; Ando, Wakako; Kawamorita, Takushi; Igarashi, Akihito; Shimizu, Kimiya

    2015-07-28

    To compare subjective refraction under binocular and monocular conditions, and to investigate the clinical factors affecting the difference in spherical refraction between the two conditions. We examined thirty eyes of 30 healthy subjects. Binocular and monocular refraction without cycloplegia was measured through circular polarizing lenses in both eyes, using the Landolt-C chart of the 3D visual function trainer-ORTe. Stepwise multiple regression analysis was used to assess the relations among several pairs of variables and the difference in spherical refraction between binocular and monocular conditions. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition, whereas cylindrical refraction did not differ significantly between conditions (p = 0.99). The explanatory variable relevant to the difference in spherical refraction between binocular and monocular conditions was the binocular spherical refraction (p = 0.032, partial regression coefficient B = 0.029; adjusted R² = 0.230). No significant correlation was seen with other clinical factors. Subjective spherical refraction in the monocular condition was significantly more myopic than that in the binocular condition. Eyes with higher degrees of myopia are more predisposed to show a large difference in spherical refraction between these two conditions.

  4. Content and context of monocular regions determine perceived depth in random dot, unpaired background and phantom stereograms.

    Science.gov (United States)

    Grove, Philip M; Gillam, Barbara; Ono, Hiroshi

    2002-07-01

    Perceived depth was measured for three types of stereograms, with the colour/texture of half-occluded (monocular) regions either similar or dissimilar to that of binocular regions or the background. In a two-panel random dot stereogram, the monocular region was filled with texture either similar or dissimilar to the far panel, or left blank. In unpaired background stereograms, the monocular region either matched the background or differed in colour or texture; in phantom stereograms, the monocular region matched the partially occluded object or was a different colour or texture. In all three cases depth was considerably impaired when the monocular texture did not match either the background or the more distant surface. The content and context of monocular regions, as well as their position, are important in determining their role as occlusion cues and thus in three-dimensional layout. We compare coincidence and accidental-view accounts of these effects.

  5. Development of high-frame-rate LED panel and its applications for stereoscopic 3D display

    Science.gov (United States)

    Yamamoto, H.; Tsutsumi, M.; Yamamoto, R.; Kajimoto, K.; Suyama, S.

    2011-03-01

    In this paper, we report the development of a high-frame-rate (HFR) LED display. Full-color images are refreshed at 480 frames per second. In order to transmit such a high-frame-rate signal via a conventional 120-Hz DVI link, we have introduced a spatiotemporal mapping of the image signal. A processor of the LED image signal and FPGAs in the LED modules have been reprogrammed so that four adjacent pixels in the input image are converted into four successive fields. The pitch of the LED panel is 20 mm. The developed 480-fps LED display is used for stereoscopic 3D display by means of a parallax barrier. The horizontal resolution of the viewed image is halved by the parallax barrier. This degradation is critical for LEDs because the pitch of LED displays is tens of times larger than that of other flat-panel displays. We have conducted experiments to improve the quality of the image viewed through the parallax barrier. The improvement is based on interpolation by afterimages, and it is shown that the HFR LED provides detailed afterimages. Furthermore, the HFR LED has been used for unconscious imaging, which provides a sensation of discovering conscious visual information in unconscious images.
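The pixel-to-field conversion can be sketched as follows. The exact FPGA ordering is not specified in the abstract, so this particular mapping (each group of four horizontally adjacent input pixels emitted over four successive 480-fps fields) is an assumption:

```python
def spatiotemporal_map(frame, phase):
    """Extract one 480-fps field from a 120-Hz input frame.

    The 120-Hz frame carries four fields' worth of data side by side:
    field `phase` (0..3) takes every fourth pixel of each row,
    starting at column `phase`.
    """
    return [row[phase::4] for row in frame]

# One 8-pixel row of a 120-Hz input frame...
frame = [[0, 1, 2, 3, 4, 5, 6, 7]]
# ...becomes four successive narrow fields at 480 fps
fields = [spatiotemporal_map(frame, p) for p in range(4)]
```

Averaged over the eye's integration time, the four fields reproduce the full-resolution frame, which is the afterimage-interpolation effect the authors exploit behind the parallax barrier.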

  6. Three-Dimensional Stereoscopic Tracking Velocimetry and Experimental/Numerical Comparison of Directional Solidification

    Science.gov (United States)

    Lee, David; Ge, Yi; Cha, Soyoung Stephen; Ramachandran, Narayanan; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Measurement of three-dimensional (3-D) three-component velocity fields is of great importance in both ground and space experiments for understanding materials processing and fluid physics. Experiments in these fields often preclude the application of conventional planar probes for observing 3-D phenomena. Here, we present the results of an investigation of stereoscopic tracking velocimetry (STV) for measuring 3-D velocity fields, including diagnostic technology development, experimental velocity measurement, and comparison with analytical and numerical computation. STV is advantageous in its system simplicity, allowing compact hardware, and in its software efficiency, allowing continual near-real-time monitoring. It has great freedom in illuminating and observing volumetric fields from arbitrary directions. STV is based on stereoscopic observation, by CCD sensors, of particles seeded in a flow. In this approach, some of the individual particle images that provide data points are likely to be lost, or to cause errors, when their images overlap and crisscross each other, especially at high particle densities. In order to maximize the valid recovery of data points, neural networks are implemented for two important processes. For the step of particle overlap decomposition, a back-propagation neural network is utilized because of its ability in pattern recognition with pertinent particle image feature parameters. For the step of particle tracking, a Hopfield neural network is employed to find appropriate particle tracks based on global optimization. Our investigation indicates that the neural networks are very efficient and useful for stereoscopically tracking particles. As an initial assessment of the diagnostic technology's performance, laminar water jets with and without pulsation were measured. The jet tip velocity profiles are in good agreement with analytical predictions. Finally, for testing in material processing applications, a simple directional solidification

  7. Development of an indoor positioning and navigation system using monocular SLAM and IMU

    Science.gov (United States)

    Mai, Yu-Ching; Lai, Ying-Chih

    2016-07-01

    Positioning and navigation systems based on the Global Positioning System (GPS) have been developed over past decades and are widely used outdoors. However, high-rise buildings and indoor environments can block the satellite signal, and many indoor positioning methods have been developed in response. In addition to distance measurements using sonar and laser sensors, this study aims to develop a method that integrates a monocular simultaneous localization and mapping (MonoSLAM) algorithm with an inertial measurement unit (IMU) to build an indoor positioning system. The MonoSLAM algorithm measures the distance (depth) between the image features and the camera. With the help of an Extended Kalman Filter (EKF), MonoSLAM can provide real-time position, velocity, and camera attitude in the world frame. Since the feature points do not always appear and cannot be trusted at all times, a wrong estimation of the features will cause the estimated position to diverge. To overcome this problem, a multisensor fusion algorithm using a multi-rate Kalman Filter was applied in this study. Finally, the experimental results verified that the proposed system improves the reliability and accuracy of MonoSLAM by integrating the IMU measurements.
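The multi-rate fusion idea can be sketched in one dimension: the IMU drives the prediction at every time step, while a MonoSLAM position fix corrects the state only when one arrives (and could be skipped when features are untrusted). All parameters and the 1-D state are illustrative assumptions, not the paper's filter:

```python
import numpy as np

def fuse(accels, slam_pos, dt=0.01, slam_every=10, r=0.05, q=0.1):
    """1-D multi-rate Kalman fusion sketch.  State x = [pos, vel].

    `accels` arrive at the IMU rate (every dt); `slam_pos` fixes
    arrive once per `slam_every` IMU steps.
    """
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity model
    B = np.array([0.5 * dt * dt, dt])          # acceleration input
    H = np.array([[1.0, 0.0]])                 # SLAM observes position
    Q = q * np.outer(B, B)
    out = []
    for k, a in enumerate(accels):
        x = F @ x + B * a                      # IMU-driven prediction
        P = F @ P @ F.T + Q
        if k % slam_every == 0 and k // slam_every < len(slam_pos):
            z = slam_pos[k // slam_every]      # low-rate vision fix
            y = z - H @ x
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

# Stationary robot: zero acceleration, SLAM keeps reporting ~0 m
track = fuse(accels=[0.0] * 30, slam_pos=[0.0, 0.0, 0.0])
```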

  8. Real-Time Obstacle Detection Approach using Stereoscopic Images

    Directory of Open Access Journals (Sweden)

    Nadia Baha

    2014-02-01

    Full Text Available In this paper, we propose a new and simple approach to obstacle and free-space detection in indoor and outdoor environments in real time, using stereo vision as the sensor. The real-time obstacle detection algorithm uses a two-dimensional disparity map to detect obstacles in the scene without constructing the ground plane. The proposed approach combines accumulating and thresholding techniques to detect and cluster obstacle pixels into objects using a dense disparity map. The results from both analysis modules are combined to provide information about the free space. Experimental results are presented to show the effectiveness of the proposed method in real time.
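The abstract does not define its accumulation scheme; a common realization of accumulate-and-threshold on a dense disparity map is the u-disparity histogram, sketched here as an illustration rather than the authors' algorithm:

```python
import numpy as np

def u_disparity_obstacles(disp, min_count=3, d_max=64):
    """Flag obstacle columns by accumulating and thresholding.

    Builds a u-disparity histogram (per image column, the count of
    pixels at each disparity).  A near-vertical obstacle stacks many
    rows at one disparity, while the ground sweeps across disparities,
    so thresholding the per-column peak separates the two without
    ever fitting the ground plane.
    """
    disp = np.asarray(disp)
    h, w = disp.shape
    hist = np.zeros((d_max, w), int)
    for v in range(h):
        for u in range(w):
            d = int(disp[v, u])
            if 0 < d < d_max:
                hist[d, u] += 1
    return hist.max(axis=0) >= min_count     # True = obstacle column

# Columns 2-3 hold a frontal obstacle at disparity 20; elsewhere the
# ground plane sweeps disparities 1..5 down the image
disp = np.array([[1, 1, 20, 20, 1],
                 [2, 2, 20, 20, 2],
                 [3, 3, 20, 20, 3],
                 [4, 4, 20, 20, 4],
                 [5, 5, 20, 20, 5]])
mask = u_disparity_obstacles(disp)
```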

  9. Design and Manufacture of Innovative Three-Dimensional Stereoscopic Projection Devices

    Institute of Scientific and Technical Information of China (English)

    房若宇

    2016-01-01

    Inspired by the rectangular-pyramid model of stereoscopic projection devices, this work explores the specific design and manufacture of cone-type and front-and-back parallel-plate stereoscopic projection devices, and with both systems excellent three-dimensional stereoscopic projection imaging was achieved. These two kinds of devices represent the limiting cases of stereoscopic projection devices built on polygonal pyramids, with the maximum and minimum numbers of panels. The present work is a bold innovation for college physics experiments.

  10. Balance and coordination after viewing stereoscopic 3D television.

    Science.gov (United States)

    Read, Jenny C A; Simonotto, Jennifer; Bohr, Iwo; Godfrey, Alan; Galna, Brook; Rochester, Lynn; Smulders, Tom V

    2015-07-01

    Manufacturers and the media have raised the possibility that viewing stereoscopic 3D television (S3D TV) may cause temporary disruption to balance and visuomotor coordination. We looked for evidence of such effects in a laboratory-based study. Four hundred and thirty-three people aged 4-82 years old carried out tests of balance and coordination before and after viewing an 80 min movie in either conventional 2D or stereoscopic 3D, while wearing two triaxial accelerometers. Accelerometry produced little evidence of any change in body motion associated with S3D TV. We found no evidence that viewing the movie in S3D causes a detectable impairment in balance or in visuomotor coordination.

  11. Stereoscopic virtual reality models for planning tumor resection in the sellar region

    Directory of Open Access Journals (Sweden)

    Wang Shou-sen

    2012-11-01

    Full Text Available Abstract Background It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region. Methods To investigate the value of using a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography, MRI-T1W1, and contrast enhanced MRI-T1W1 image sequence scanning. The CT and MRI scanning data were collected and then imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery. Results All sellar tumor models clearly displayed bone, the internal carotid artery, circle of Willis and its branches, the optic nerve and chiasm, ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the transmononasal sphenoid sinus approach, transpterional approach, and other approaches. Eleven surgeons who used virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images but that other images needed to be used in combination with the virtual reality images. Conclusions The three-dimensional virtual reality models were helpful for individualized planning of surgery in the sellar region. Virtual reality appears to be promising as a valuable tool for sellar region surgery in the future.

  12. Development of a monocular vision system for robotic drilling

    Institute of Scientific and Technical Information of China (English)

    Wei-dong ZHU; Biao MEI; Guo-rui YAN; Ying-lin KE

    2014-01-01

    Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion on the hardware components, software components, and system integration. The elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system. A new method is proposed for the simultaneous calibration of camera internal parameters and hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is better than 0.15 mm, which meets the requirement of robotic drilling for aircraft structures.
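Elliptical contour extraction for reference hole detection ultimately reduces to fitting a conic to edge points. The following is a minimal sketch of the algebraic least-squares step (a generic direct conic fit via SVD, not the paper's specific algorithm; the function name and the synthetic hole are illustrative):

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic fit: a x^2 + b xy + c y^2 + d x + e y + f = 0.

    Builds the design matrix over the edge points and takes the right
    singular vector with the smallest singular value as the coefficient
    vector (defined up to scale).
    """
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # (a, b, c, d, e, f)

# synthetic reference hole: ellipse with semi-axes 2 and 1, centred at (3, 4)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = 3 + 2 * np.cos(t), 4 + np.sin(t)
coef = fit_conic(x, y)
residual = np.abs(
    np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)]) @ coef)
```

For noise-free points on an exact ellipse the residual is at machine precision, and the fitted conic satisfies the ellipse condition b^2 - 4ac < 0.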

  13. Deep monocular 3D reconstruction for assisted navigation in bronchoscopy.

    Science.gov (United States)

    Visentini-Scarzanella, Marco; Sugiura, Takamasa; Kaneko, Toshimitsu; Koto, Shinichiro

    2017-07-01

    In bronchoscopy, computer vision systems for navigation assistance are an attractive low-cost solution to guide the endoscopist to target peripheral lesions for biopsy and histological analysis. We propose a decoupled deep learning architecture that projects input frames onto the domain of CT renderings, thus allowing offline training from patient-specific CT data. A fully convolutional network architecture is implemented on GPU and tested on a phantom dataset involving 32 video sequences and ~60k frames with aligned ground truth and renderings, which is made available as the first public dataset for bronchoscopy navigation. An average estimated depth accuracy of 1.5 mm was obtained, outperforming conventional direct depth estimation from input frames by 60%, and with a computational time of ~30 ms on modern GPUs. Qualitatively, the estimated depth and renderings closely resemble the ground truth. The proposed method shows a novel architecture to perform real-time monocular depth estimation without losing patient specificity in bronchoscopy. Future work will include integration within SLAM systems and collection of in vivo datasets.

  14. Monocular visual scene understanding: understanding multi-object traffic scenes.

    Science.gov (United States)

    Wojek, Christian; Walk, Stefan; Roth, Stefan; Schindler, Konrad; Schiele, Bernt

    2013-04-01

    Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.

  15. Mobile Robot Hierarchical Simultaneous Localization and Mapping Using Monocular Vision

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A hierarchical mobile robot simultaneous localization and mapping (SLAM) method that allows us to obtain accurate maps is presented. The local map level is composed of a set of local metric feature maps that are guaranteed to be statistically independent. The global level is a topological graph whose arcs are labeled with the relative location between local maps. An estimate of these relative locations is maintained with a local map alignment algorithm, and a more accurate estimate is calculated through a global minimization procedure using the loop closure constraint. The local map is built with a Rao-Blackwellised particle filter (RBPF), where the particle filter is used to extend the path posterior by sampling new poses. Landmark position estimation and update are implemented through an extended Kalman filter (EKF). Monocular vision mounted on the robot tracks 3D natural point landmarks, which are structured with matching scale invariant feature transform (SIFT) feature pairs. Matching of multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(lb N). Experiment results on a Pioneer mobile robot in a real indoor environment show the superior performance of the proposed method.
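The O(lb N) matching cost comes from nearest-neighbour queries on a balanced KD-tree. A minimal pure-Python sketch of such a tree (2-D points for brevity, whereas real SIFT descriptors are 128-D; all names are illustrative):

```python
def build_kdtree(points, depth=0):
    """Build a balanced KD-tree over descriptor vectors (illustrative sketch)."""
    if not points:
        return None
    axis = depth % len(points[0])          # cycle through dimensions
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                 # median split keeps the tree balanced
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def nearest(node, target, best=None):
    """Nearest-neighbour query; expected O(lb N) on a balanced tree."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist2(point, target) < dist2(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, target, best)
    # descend the far side only if the splitting plane could hide a closer point
    if diff * diff < dist2(best, target):
        best = nearest(far, target, best)
    return best
```

In the SLAM context, each tracked feature's descriptor would be queried against the tree of stored landmark descriptors to find its match.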

  16. Surgical outcome in monocular elevation deficit: A retrospective interventional study

    Directory of Open Access Journals (Sweden)

    Bandyopadhyay Rakhi

    2008-01-01

    Full Text Available Background and Aim: Monocular elevation deficiency (MED is characterized by a unilateral defect in elevation, caused by paretic, restrictive or combined etiology. Treatment of this multifactorial entity is therefore varied. In this study, we performed different surgical procedures in patients of MED and evaluated their outcome, based on ocular alignment, improvement in elevation and binocular functions. Study Design: Retrospective interventional study. Materials and Methods: Twenty-eight patients were included in this study, from June 2003 to August 2006. Five patients underwent Knapp procedure, with or without horizontal squint surgery, 17 patients had inferior rectus recession, with or without horizontal squint surgery, three patients had combined inferior rectus recession and Knapp procedure and three patients had inferior rectus recession combined with contralateral superior rectus or inferior oblique surgery. The choice of procedure was based on the results of the forced duction test (FDT. Results: Forced duction test was positive in 23 cases (82%. Twenty-four of 28 patients (86% were aligned to within 10 prism diopters. Elevation improved in 10 patients (36% from no elevation above primary position (-4 to only slight limitation of elevation (-1. Five patients had preoperative binocular vision and none gained it postoperatively. No significant postoperative complications or duction abnormalities were observed during the follow-up period. Conclusion: Management of MED depends upon selection of the correct surgical technique based on the results of the FDT, for a satisfactory outcome.

  17. Pose Estimation and Segmentation of Multiple People in Stereoscopic Movies.

    Science.gov (United States)

    Seguin, Guillaume; Alahari, Karteek; Sivic, Josef; Laptev, Ivan

    2015-08-01

    We describe a method to obtain a pixel-wise segmentation and pose estimation of multiple people in stereoscopic videos. This task involves challenges such as dealing with unconstrained stereoscopic video, non-stationary cameras, and complex indoor and outdoor dynamic scenes with multiple people. We cast the problem as a discrete labelling task involving multiple person labels, devise a suitable cost function, and optimize it efficiently. The contributions of our work are two-fold: First, we develop a segmentation model incorporating person detections and learnt articulated pose segmentation masks, as well as colour, motion, and stereo disparity cues. The model also explicitly represents depth ordering and occlusion. Second, we introduce a stereoscopic dataset with frames extracted from feature-length movies "StreetDance 3D" and "Pina". The dataset contains 587 annotated human poses, 1,158 bounding box annotations and 686 pixel-wise segmentations of people. The dataset is composed of indoor and outdoor scenes depicting multiple people with frequent occlusions. We demonstrate results on our new challenging dataset, as well as on the H2view dataset from (Sheasby et al. ACCV 2012).

  18. Stereoscopic Projection in the Chemistry Classroom

    Science.gov (United States)

    McGrew, LeRoy A.

    1972-01-01

    Describes the development of a three-dimensional projection system used to present structural principles by means of slides. Polarization of images from two planar projectors and viewing through polarized lenses gives stereo results. Techniques used in producing the slides and constructing the equipment are given. (TS)

  19. Towards Reliable Stereoscopic 3D Quality Evaluation: Subjective Assessment and Objective Metrics

    OpenAIRE

    Xing, Liyuan

    2013-01-01

    Stereoscopic three-dimensional (3D) services have become more popular recently amid promise of providing immersive quality of experience (QoE) to the end-users with the help of binocular depth. However, various arisen artifacts in the stereoscopic 3D processing chain might cause discomfort and severely degrade the QoE. Unfortunately, although the causes and nature of artifacts have already been clearly understood, it is impossible to eliminate them under the limitation of current stereoscopic...

  20. Stereoscopic PIV measurements of flow in the nasal cavity with high flow therapy

    Science.gov (United States)

    Spence, C. J. T.; Buchmann, N. A.; Jermy, M. C.; Moore, S. M.

    2011-04-01

    Knowledge of the airflow characteristics within the nasal cavity with nasal high flow (NHF) therapy and during unassisted breathing is essential to understand the treatment's efficacy. The distribution and velocity of the airflow in the nasal cavity with and without NHF cannula flow have been investigated using stereoscopic particle image velocimetry at steady peak expiration and inspiration. In vivo breathing flows were measured and dimensionally scaled to reproduce physiological conditions in vitro. A scaled model of the complete nasal cavity was constructed in transparent silicone and airflow simulated with an aqueous glycerine solution. NHF modifies nasal cavity flow patterns significantly, altering the proportion of inspiration and expiration through each passageway and producing jets with in vivo velocities up to 17.0 m/s for 30 l/min cannula flow. Velocity magnitudes differed appreciably between the left and right sides of the nasal cavity. The importance of using a three-component measurement technique when investigating nasal flows has been highlighted.

  1. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented to the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. MONOCULAR AND BINOCULAR VISION IN THE PERFORMANCE OF A COMPLEX SKILL

    Directory of Open Access Journals (Sweden)

    Thomas Heinen

    2011-09-01

    Full Text Available The goal of this study was to investigate the role of binocular and monocular vision in 16 gymnasts as they perform a handspring on vault. In particular, we reasoned that if binocular visual information is eliminated while experts and apprentices perform a handspring on vault, and their performance level changes or is maintained, then such information must, or must not, be necessary for their best performance. If the elimination of binocular vision leads to differences in gaze behavior in either experts or apprentices, this would answer the question of whether gaze behavior is adaptive, and thus whether it is a function of expertise level. Gaze behavior was measured using a portable and wireless eye-tracking system in combination with a movement-analysis system. Results revealed that gaze behavior differed between experts and apprentices in the binocular and monocular conditions. In particular, apprentices showed fewer fixations of longer duration in the monocular condition as compared to experts and the binocular condition. Apprentices showed longer blink duration than experts in both the monocular and binocular conditions. Eliminating binocular vision led to a shorter repulsion phase and a longer second flight phase in apprentices. Experts exhibited no differences in phase durations between binocular and monocular conditions. The findings suggest that experts may not rely on binocular vision when performing handsprings, and that movement performance may be influenced in apprentices when binocular vision is eliminated. We conclude that knowledge about gaze-movement relationships may be beneficial for coaches when teaching the handspring on vault in gymnastics.

  3. Discrimination of rotated-in-depth curves is facilitated by stereoscopic cues, but curvature is not tuned for stereoscopic rotation-in-depth.

    Science.gov (United States)

    Bell, Jason; Kanji, Jameel; Kingdom, Frederick A A

    2013-01-25

    Object recognition suffers when objects are rotated-in-depth, as for example with changes to viewing angle. However the loss of recognition can be mitigated by stereoscopic cues, suggesting that object coding is not strictly two-dimensional. Here we consider whether the encoding of rotation-in-depth (RID) of a simple curve is tuned for stereoscopic depth. Experiment 1 first determined that test subjects were sensitive to changes in stereoscopic RID, by showing that stereoscopic cues improved the discrimination of RID when other spatial cues to RID were ineffective. Experiment 2 tested directly whether curvature-sensitive mechanisms were selective for stereoscopic RID. Curvature after-effects were measured for unrotated test curves following adaptation to various RID adaptors. Although strong adaptation tuning for RID angle was found, tuning was identical for stereo and non-stereo adaptors. These findings show that while stereoscopic cues can facilitate three-dimensional curvature discrimination, curvature-sensitive mechanisms are not tuned for stereoscopic RID.

  4. Patterns of non-embolic transient monocular visual field loss.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Plant, G T

    2013-07-01

    The aim of this study was to systematically describe the semiology of non-embolic transient monocular visual field loss (neTMVL). We conducted a retrospective case note analysis of patients from Moorfields Eye Hospital (1995-2007). The variables analysed were age, age of onset, gender, past medical history or family history of migraine, eye affected, onset, duration and offset, perception (pattern, positive and negative symptoms), associated headache and autonomic symptoms, attack frequency, and treatment response to nifedipine. We identified 77 patients (28 male and 49 female). Mean age of onset was 37 years (range 14-77 years). The neTMVL was limited to the right eye in 36 %, to the left in 47 %, and occurred independently in either eye in 5 % of cases. A past medical history of migraine was present in 12 % and a family history in 8 %. Headache followed neTMVL in 14 % and was associated with autonomic features in 3 %. The neTMVL was perceived as grey in 35 %, white in 21 %, black in 16 % and as phosphenes in 9 %. The most frequent pattern was patchy loss (20 %). Recovery of vision frequently resembled attack onset in reverse. In 3 patients without associated headache the loss of vision was permanent. Treatment with nifedipine was initiated in 13 patients with an attack frequency of more than one per week and reduced the attack frequency in all. In conclusion, this large series of patients with neTMVL permits classification into five types of reversible visual field loss (grey, white, black, phosphenes, patchy). The treatment response to nifedipine suggests that some attacks are caused by vasospasm.

  5. Auto convergence for stereoscopic 3D cameras

    Science.gov (United States)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer generated content is typically viewed at a close distance which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in realtime and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determines how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs real-time at 30 fps on a TI OMAP4 processor. It is tested using an OMAP4 embedded prototype stereo 3-D camera. It significantly improves 3-D viewing comfort.
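The disparity-estimation step described above, correlating the vertical projections of the left and right images, can be sketched as follows. This is an illustrative reconstruction, not TI's implementation: each image is collapsed to a column-sum profile, and the horizontal shift that best correlates the two profiles is taken as the disparity estimate.

```python
import numpy as np

def estimate_global_disparity(left, right, max_shift=16):
    """Estimate a single horizontal disparity between a stereo pair
    (illustrative sketch of correlating vertical projections).

    Each image is collapsed to a 1-D profile by summing over rows; the
    disparity is the shift of the right profile that maximizes its
    correlation with the left profile.
    """
    pl = left.sum(axis=0).astype(float)
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean()
    pr -= pr.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = float(pl @ np.roll(pr, s))   # correlation at shift s
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift
```

The estimated disparity would then drive the horizontal frame shift toward the chosen convergence point, subject to the disparity safety check the abstract describes.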

  6. The contribution of monocular depth cues to scene perception by pigeons.

    Science.gov (United States)

    Cavoto, Brian R; Cook, Robert G

    2006-07-01

    The contributions of different monocular depth cues to performance of a scene perception task were investigated in 4 pigeons. They discriminated the sequential depth ordering of three geometric objects in computer-rendered scenes. The orderings of these objects were specified by the combined presence or absence of the pictorial cues of relative density, occlusion, and relative size. In Phase 1, the pigeons learned the task as a direct function of the number of cues present. The three monocular cues contributed equally to the discrimination. Phase 2 established that differential shading on the objects provided an additional discriminative cue. These results suggest that the pigeon visual system is sensitive to many of the same monocular depth cues that are known to be used by humans. The theoretical implications for a comparative psychology of picture processing are considered.

  7. Refractive error and monocular viewing strengthen the hollow-face illusion.

    Science.gov (United States)

    Hill, Harold; Palmisano, Stephen; Matthews, Harold

    2012-01-01

    We measured the strength of the hollow-face illusion--the 'flipping distance' at which perception changes between convex and concave--as a function of a lens-induced 3 dioptre refractive error and monocular/binocular viewing. Refractive error and closing one eye both strengthened the illusion to approximately the same extent. The illusion was weakest viewed binocularly without refractive error and strongest viewed monocularly with it. This suggests binocular cues disambiguate the illusion at greater distances than monocular cues, but that both are disrupted by refractive error. We argue that refractive error leaves the ambiguous low-spatial-frequency shading information critical to the illusion largely unaffected while disrupting other, potentially disambiguating, depth/distance cues.

  8. Eye movements in chameleons are not truly independent - evidence from simultaneous monocular tracking of two targets.

    Science.gov (United States)

    Katz, Hadas Ketter; Lustig, Avichai; Lev-Ari, Tidhar; Nov, Yuval; Rivlin, Ehud; Katzir, Gadi

    2015-07-01

    Chameleons perform large-amplitude eye movements that are frequently referred to as independent, or disconjugate. When prey (an insect) is detected, the chameleon's eyes converge to view it binocularly and 'lock' in their sockets so that subsequent visual tracking is by head movements. However, the extent of the eyes' independence is unclear. For example, can a chameleon visually track two small targets simultaneously and monocularly, i.e. one with each eye? This is of special interest because eye movements in ectotherms and birds are frequently independent, with optic nerves that are fully decussated and intertectal connections that are not as developed as in mammals. Here, we demonstrate that chameleons presented with two small targets moving in opposite directions can perform simultaneous, smooth, monocular, visual tracking. To our knowledge, this is the first demonstration of such a capacity. The fine patterns of the eye movements in monocular tracking were composed of alternating, longer, 'smooth' phases and abrupt 'step' events, similar to smooth pursuits and saccades. Monocular tracking differed significantly from binocular tracking with respect to both 'smooth' phases and 'step' events. We suggest that in chameleons, eye movements are not simply 'independent'. Rather, at the gross level, eye movements are (i) disconjugate during scanning, (ii) conjugate during binocular tracking and (iii) disconjugate, but coordinated, during monocular tracking. At the fine level, eye movements are disconjugate in all cases. These results support the view that in vertebrates, basic monocular control is under a higher level of regulation that dictates the eyes' level of coordination according to context. © 2015. Published by The Company of Biologists Ltd.

  9. Elimination of aniseikonia in monocular aphakia with a contact lens-spectacle combination.

    Science.gov (United States)

    Schechter, R J

    1978-01-01

    Correction of monocular aphakia with contact lenses generally results in aniseikonia in the range of 7--9%; with correction by intraocular lenses, aniseikonia is approximately 2%. We present a new method of correcting aniseikonia in monocular aphakics using a contact lens-spectacle combination. A formula is derived wherein the contact lens is deliberately overcorrected; this overcorrection is then neutralized by the appropriate spectacle lens, to be worn over the contact lens. Calculated results with this system over a wide range of possible situations consistently result in an aniseikonia of 0.1%.
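The abstract does not reproduce the derived formula, but the underlying optics can be sketched with the standard spectacle-magnification relation; the numbers below are illustrative assumptions, not the paper's values.

```latex
% Illustrative sketch, not the paper's derivation: a spectacle of power F_s
% worn at vertex distance d over the overcorrected contact lens acts as a
% Galilean telescope with approximate angular magnification
M \approx \frac{1}{1 - d\,F_s},
% so a minus spectacle (F_s < 0) gives M < 1, the minification needed to
% offset the ~7--9\% magnification of contact-lens-corrected aphakia.
% Example with assumed values d = 12\,\mathrm{mm} and target M = 0.92:
F_s = \frac{1 - 1/M}{d} \approx -7.2\ \mathrm{D},
\qquad
\Delta F_c = \frac{-F_s}{1 - d\,F_s} \approx +6.7\ \mathrm{D},
% where \Delta F_c is the deliberate overplussing of the contact lens that the
% spectacle neutralizes (lens effectivity transferred to the corneal plane).
```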

  10. END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS

    Directory of Open Access Journals (Sweden)

    C. Pinard

    2017-08-01

    Full Text Available We propose a depth map inference system from monocular videos based on a novel dataset for navigation that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Unlike most navigation datasets, the lack of rotation implies an easier structure-from-motion problem, which can be leveraged for different kinds of tasks such as depth inference and obstacle avoidance. We also propose an architecture for end-to-end depth inference with a fully convolutional network. Results show that although tied to camera intrinsic parameters, the problem is locally solvable and leads to good-quality depth prediction.

  11. Development of monocular and binocular multi-focus 3D display systems using LEDs

    Science.gov (United States)

    Kim, Sung-Kyu; Kim, Dong-Wook; Son, Jung-Young; Kwon, Yong-Moo

    2008-04-01

    Multi-focus 3D display systems are developed and the possibility of satisfying eye accommodation is tested. Here, "multi-focus" refers to providing monocular depth cues at multiple depth levels. By achieving the multi-focus function, we developed 3D display systems for one eye and for both eyes, which can satisfy accommodation to displayed virtual objects within a defined depth range. The monocular accommodation and binocular convergence 3D effects of the systems are tested; evidence that accommodation is satisfied and experimental results on binocular 3D fusion are given using the proposed 3D display systems.

  12. The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation

    Science.gov (United States)

    Woods, Andrew J.; Rourke, Tegan

    2007-02-01

    A range of advertised "Stereo-Ready" DLP projectors are now available in the market which allow high-quality flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment, and in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible and incompatible, which display modes (frequency and resolution) are compatible, and which stereoscopic display quality attributes are important. We conducted a study to test a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely differing specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode. Twenty-three projectors would work at 60 Hz stereo in VGA mode.

  13. "Convergent observations" with the stereoscopic HEGRA CT system

    CERN Document Server

    Lampeitl, H; Lampeitl, Hubert; Hofmann, Werner

    1999-01-01

    Observations of air showers with the stereoscopic HEGRA IACT system are usually carried out in a mode where all telescopes point in the same direction. Alternatively, one could take into account the finite distance to the shower maximum and orient the telescopes such that their optical axes intersect at the average height of the shower maximum. In this paper we show that this ``convergent observation mode'' is advantageous for the observation of extended sources and for surveys, based on a small data set taken with the HEGRA telescopes operated in this mode.

  14. Evaluation Method and System of Stereoscopic Projection Quality

    Institute of Scientific and Technical Information of China (English)

    李艳; 苏萍; 马建设; 毛乐山

    2012-01-01

    For polarized stereoscopic projectors, the influence on stereoscopic projection quality of two main performance parameters, the inconsistency in color and brightness between the left and right optical paths and stereo crosstalk, is studied. Stereoscopic projection quality is affected by a number of factors: the two projected images not coinciding exactly on the screen, differences in color and brightness between the two optical paths, stereo crosstalk caused by the polarized glasses, etc. An evaluation method based on these factors is presented and an evaluation system is built. The system captures the projected images with a digital camera; the parameters are computed and analyzed using OpenCV, compared against preset maximum deviation values, and the comparison results are displayed and fed back. The system was verified on a polarized stereoscopic projector. The results indicate that the evaluation system can effectively measure and feed back the performance parameters of a stereoscopic projector and helps to comprehensively analyze the stereoscopic projection quality of polarized stereoscopic projectors, thus enhancing stereoscopic projection quality.
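The crosstalk and path-mismatch parameters such a system measures are commonly defined as below. This is a generic sketch of the standard definitions used for polarized displays, not the paper's exact formulas; all names and values are illustrative.

```python
def crosstalk_percent(leakage, signal, black):
    """Stereo crosstalk as commonly defined for polarized displays:
    luminance leaking from the unintended channel relative to the intended
    channel, both corrected for the display's black level."""
    return 100.0 * (leakage - black) / (signal - black)

def channel_mismatch(lum_left, lum_right):
    """Relative brightness mismatch between the left and right optical paths
    (0 means the two paths are perfectly matched)."""
    return abs(lum_left - lum_right) / max(lum_left, lum_right)

# Example with illustrative camera-measured luminances (arbitrary units):
# through-lens leakage 6.0, intended signal 101.0, black level 1.0
xt = crosstalk_percent(6.0, 101.0, 1.0)      # 5% crosstalk
mismatch = channel_mismatch(80.0, 100.0)     # 20% left/right mismatch
```

Each computed value would then be compared against a preset maximum deviation, as the abstract describes, to produce the pass/fail feedback.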

  15. Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    Science.gov (United States)

    Al-Kaff, Abdulla; García, Fernando; Martín, David; De La Escalera, Arturo; Armingol, José María

    2017-01-01

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is especially complex for micro and small aerial vehicles because of their Size, Weight and Power (SWaP) constraints, which make lightweight sensors such as a digital camera a better choice than laser or radar. For real-time applications, many existing works rely on stereo cameras to obtain a 3D model of the obstacles or to estimate their depth. Instead, this paper proposes a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points in consecutive frames. During the motion of the Unmanned Aerial Vehicle (UAV), the detection algorithm estimates the change in the apparent area of approaching obstacles. First, the method detects the feature points of the obstacles and extracts the obstacles that are likely to approach the UAV. Second, by comparing the obstacle's area ratio with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show its accuracy compared with other related works. PMID:28481277
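    The core of the expansion cue, comparing the area of the convex hull around tracked feature points in consecutive frames, can be sketched in a few lines. This is an illustrative reimplementation (hull via the monotone-chain algorithm, area via the shoelace formula), not the authors' code, and the threshold value is an assumption.

    ```python
    def convex_hull(points):
        """Monotone-chain convex hull of 2-D points, returned counter-clockwise."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
        lower = []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        upper = []
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def hull_area(points):
        """Shoelace area of the convex hull of the feature points."""
        h = convex_hull(points)
        n = len(h)
        s = sum(h[i][0]*h[(i+1) % n][1] - h[(i+1) % n][0]*h[i][1] for i in range(n))
        return 0.5 * abs(s)

    def approaching(prev_pts, curr_pts, ratio_threshold=1.3):
        """Flag a potential collision when the hull area grows by the
        threshold ratio between consecutive frames (threshold is assumed)."""
        return hull_area(curr_pts) / hull_area(prev_pts) >= ratio_threshold
    ```

    An obstacle whose feature-point hull grows from frame to frame faster than the threshold ratio would be flagged for the avoidance maneuver.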


  17. Surgical approaches to complex vascular lesions: the use of virtual reality and stereoscopic analysis as a tool for resident and student education.

    Science.gov (United States)

    Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J

    2012-08-01

    Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.

  18. Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.

    Science.gov (United States)

    Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M

    2012-02-01

    We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure ground segregation.

  19. Perception of Acceleration in Motion-In-Depth With Only Monocular and Binocular Information

    Directory of Open Access Journals (Sweden)

    Santiago Estaún

    2003-01-01

    Perception of acceleration in motion-in-depth with monocular information and with monocular plus binocular information. On many occasions we need to adjust our actions to objects that change their acceleration. However, no evidence of a direct perception of acceleration has been found. Instead, we seem to be able to detect changes of velocity in 2-D motion within a temporal window. Moreover, recent results suggest that motion-in-depth is detected through changes of position. Therefore, in order to detect acceleration in depth, the visual system would need to carry out some kind of second-order computation. In two experiments, we show that observers do not perceive acceleration in approach trajectories, at least within the ranges we used [600-800 ms], resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular plus binocular), responses conformed to a constant-velocity strategy. The overestimation was reduced, however, when binocular information was available.

  20. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…


  2. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual results, preserving or even improving the preoperative best-corrected visual acuity.

  3. Depth scaling in phantom and monocular gap stereograms using absolute distance information.

    Science.gov (United States)

    Kuroki, Daiichiro; Nakamizo, Sachio

    2006-11-01

    The present study aimed to investigate whether the visual system scales apparent depth from binocularly unmatched features by using absolute distance information. In Experiment 1 we examined the effect of convergence on perceived depth in phantom stereograms [Gillam, B., & Nakayama, K. (1999). Quantitative depth for a phantom surface can be based on cyclopean occlusion cues alone. Vision Research, 39, 109-112.], monocular gap stereograms [Pianta, M. J., & Gillam, B. J. (2003a). Monocular gap stereopsis: manipulation of the outer edge disparity and the shape of the gap. Vision Research, 43, 1937-1950.] and random dot stereograms. In Experiments 2 and 3 we examined the effective range of viewing distances for scaling the apparent depths in these stereograms. The results showed that: (a) the magnitudes of perceived depths increased in all stereograms as the estimate of the viewing distance increased while keeping proximal and/or distal sizes of the stimuli constant, and (b) the effective range of viewing distances was significantly shorter in monocular gap stereograms. The first result indicates that the visual system scales apparent depth from unmatched features as well as that from horizontal disparity, while the second suggests that, at far distances, the strength of the depth signal from an unmatched feature in monocular gap stereograms is relatively weaker than that from horizontal disparity.

  4. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    Science.gov (United States)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

    The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates with low-altitude flight, limited payload, and low-accuracy onboard sensors. According to these characteristics, a method is developed to determine the location of a ground moving target imaged from the air by a monocular camera mounted on a MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters that provide the MAV's and the target's altitude; instead, it requires only the MAV flight state provided by its inherent onboard navigation system, which includes an inertial measurement unit (IMU) and a global positioning system (GPS). The key is to obtain accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region around the target in the current image, features that lie on the same plane as the target are extracted and retained as aiding features. An inverse-velocity method then calculates the locations of these points by integrating them with the aircraft state. The altitude of the object, calculated from the positions of these aiding features, is combined with the aircraft state and image coordinates to geo-locate the target. Meanwhile, a Bayesian estimation framework is employed to suppress noise from the camera, IMU and GPS. Firstly, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for the estimation of the aircraft state and the locations of the aiding features, which define the moving target's local environment.
Secondly, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft state and the aiding feature locations, and then exports them for the
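    The unscented-transformation step, propagating an estimated mean and covariance through a nonlinear geo-location function via sigma points, can be sketched as below. The parameter defaults and the example function are illustrative assumptions, not values taken from the paper.

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
        """Propagate (mean, cov) through a nonlinear function f using the
        standard scaled sigma-point construction."""
        mean = np.asarray(mean, float)
        cov = np.asarray(cov, float)
        n = mean.size
        lam = alpha**2 * (n + kappa) - n
        # Sigma-point offsets from the Cholesky factor of the scaled covariance
        S = np.linalg.cholesky((n + lam) * cov)
        sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                       + [mean - S[:, i] for i in range(n)]
        # Mean and covariance weights
        Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        Wc = Wm.copy()
        Wm[0] = lam / (n + lam)
        Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
        # Push each sigma point through f and recombine
        Y = np.array([f(s) for s in sigma])
        y_mean = Wm @ Y
        d = Y - y_mean
        y_cov = (Wc[:, None] * d).T @ d
        return y_mean, y_cov
    ```

    For a linear function the transform reproduces the exact propagated mean and covariance, which makes it easy to sanity-check before substituting the nonlinear camera/geo-location model.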

  5. A vertical parallax reduction method for stereoscopic video based on adaptive interpolation

    Science.gov (United States)

    Li, Qingyu; Zhao, Yan

    2016-10-01

    Vertical parallax is a main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce vertical parallax without affecting horizontal parallax, a self-adaptive image-scaling algorithm is proposed that exploits edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced to improve the accuracy of the transformation matrix. First, the self-adaptive scaling algorithm interpolates the original image; when a pixel lies in an edge area, the interpolation is performed adaptively along the edge direction obtained with the Sobel operator. Second, the SIFT algorithm, which is invariant to scaling, rotation and affine transformation, detects matching feature points between the binocular images. From the coordinates of the matched points, the transformation matrix that reduces the vertical parallax is estimated with the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to compute the new coordinates of each pixel of that view. The experimental results show that, compared with a method that reduces vertical parallax by computing a two-dimensional projective transformation with a linear algorithm, the proposed method reduces vertical parallax markedly more, while keeping the horizontal parallax closer to that of the original image after the correction. The proposed method can therefore optimize the vertical parallax reduction.
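    The fitting step can be illustrated with a deliberately simplified stand-in: instead of the paper's projective transform estimated with Levenberg-Marquardt, the sketch below fits an affine correction of the vertical coordinate by linear least squares over matched point pairs. The function names and the affine model are assumptions for illustration.

    ```python
    import numpy as np

    def fit_vertical_correction(pts_left, pts_right):
        """Fit y_left ~ a*x_right + b*y_right + c by linear least squares,
        so that warping the right view's y-coordinates removes the
        vertical component of the disparity."""
        pts_left = np.asarray(pts_left, float)
        pts_right = np.asarray(pts_right, float)
        A = np.column_stack([pts_right[:, 0], pts_right[:, 1],
                             np.ones(len(pts_right))])
        coeffs, *_ = np.linalg.lstsq(A, pts_left[:, 1], rcond=None)
        return coeffs  # (a, b, c)

    def corrected_vertical_disparity(pts_left, pts_right, coeffs):
        """Residual vertical disparity after applying the fitted correction."""
        pts_left = np.asarray(pts_left, float)
        pts_right = np.asarray(pts_right, float)
        a, b, c = coeffs
        y_pred = a * pts_right[:, 0] + b * pts_right[:, 1] + c
        return pts_left[:, 1] - y_pred
    ```

    When the matched points differ only by a constant vertical offset, the fit recovers it exactly and the residual vertical disparity drops to zero.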

  6. Measurement and research of optical characteristics of auto-stereoscopic display

    Institute of Scientific and Technical Information of China (English)

    王丛; 金杰; 沈丽丽

    2016-01-01

    In order to evaluate the optical characteristics of auto-stereoscopic displays, an eight-viewpoint parallax-barrier auto-stereoscopic display was taken as an example: the optical characteristics of the different viewpoint images were studied, and the brightness and chromaticity values at different viewing angles were measured. The optical characteristics include the optimal viewing position, crosstalk, viewing range, brightness uniformity, chromaticity uniformity and Moiré fringes. The obtained data samples were analyzed. The results show that this measurement and analysis method can evaluate the optical characteristics of multi-viewpoint stereoscopic displays quantitatively and enhances the objectivity of display performance assessment. The study offers guidance for auto-stereoscopic display design.

  7. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    Science.gov (United States)

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.

  8. A pilot study on pupillary and cardiovascular changes induced by stereoscopic video movies

    Directory of Open Access Journals (Sweden)

    Sugita Norihiro

    2007-10-01

    Background: Taking advantage of advances in image technology, image presentation is expected to be used to promote health in the fields of medical care and public health. To accumulate knowledge on the biomedical effects induced by image presentation, an essential prerequisite for these purposes, studies on autonomic responses in more than one physiological system are necessary. In this study, changes in parameters of the pupillary light reflex and the cardiovascular reflex evoked by motion pictures were examined, which could be used to evaluate the effects of images and to avoid side effects. Methods: Three stereoscopic video movies with different properties were field-sequentially rear-projected through two LCD projectors onto an 80-inch screen. Seven healthy young subjects watched the movies in a dark room. Pupillary parameters were measured before and after presentation of the movies with an infrared pupillometer. ECG and radial blood pressure were monitored continuously. The maximum cross-correlation coefficient between heart rate and blood pressure, ρmax, was used as an index to evaluate changes in the cardiovascular reflex. Results: Parameters of the pupillary and cardiovascular reflexes changed differently after subjects watched the three different video movies. The amplitude of the pupillary light reflex, CR, increased when subjects watched the two CG movies (movies A and D), while it did not change after the movie with real scenery (movie R). The ρmax was significantly larger after presentation of movie D. Questionnaire scores for subjective evaluation of physical condition increased after presentation of all movies, but their relationship with the changes in CR and ρmax differed across the three movies. Possible causes of these biomedical differences are discussed. Conclusion: The autonomic responses were effective to monitor biomedical effects induced by image presentation. Further accumulation of data on multiple autonomic
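    The index ρmax, the maximum cross-correlation coefficient between two physiological signals over a range of lags, can be computed as sketched below. This is an illustrative implementation; the lag range is an assumption, and the study's exact preprocessing is not reproduced.

    ```python
    import numpy as np

    def rho_max(x, y, max_lag):
        """Maximum Pearson correlation between x and y over integer lags in
        [-max_lag, max_lag]; each lag correlates the overlapping segments."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        n = len(x)
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = x[lag:], y[:n - lag]
            else:
                a, b = x[:n + lag], y[-lag:]
            r = np.corrcoef(a, b)[0, 1]  # Pearson r for this alignment
            best = max(best, r)
        return best
    ```

    Two signals that are shifted copies of one another reach ρmax ≈ 1 at the lag that realigns them, which is the property the index exploits.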

  9. Efficient Stereoscopic Video Matching and Map Reconstruction for a Wheeled Mobile Robot

    Directory of Open Access Journals (Sweden)

    Oscar Montiel-Ross

    2012-10-01

    This paper presents a novel method to achieve stereoscopic vision for mobile robot (MR) navigation, with the advantage of not requiring camera calibration for depth (distance) estimation. It uses the concept of an adaptive candidate matching window for stereoscopic block-matching correspondence, resulting in improvements in efficiency and accuracy, with an average 40% reduction in computation time. All the navigation algorithms, including the stereoscopic vision module, were implemented on an original computer architecture for the Virtex 5 FPGA, in which a distributed multicore processor system was embedded and coordinated using the Message Passing Interface.
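    The block-matching correspondence underlying this approach can be sketched with a plain fixed-window SAD (sum of absolute differences) search along the epipolar scanline; the paper's adaptive candidate matching window is a refinement of this baseline, which is not reproduced here.

    ```python
    import numpy as np

    def disparity_map(left, right, block=5, max_disp=16):
        """Dense disparity by SAD block matching along the scanline.
        left, right: rectified grayscale images as 2-D integer arrays."""
        h, w = left.shape
        half = block // 2
        left = left.astype(np.int32)
        right = right.astype(np.int32)
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                best_sad, best_d = None, 0
                for d in range(0, min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    sad = np.abs(ref - cand).sum()  # matching cost
                    if best_sad is None or sad < best_sad:
                        best_sad, best_d = sad, d
                disp[y, x] = best_d
        return disp
    ```

    On a synthetic pair where the right image is the left image shifted horizontally, interior pixels recover the shift as their disparity.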

  10. STEREO/SECCHI Stereoscopic Observations Constraining the Initiation of Polar Coronal Jets

    CERN Document Server

    Patsourakos, S; Vourlidas, A; Antiochos, S K; Wuelser, J P

    2008-01-01

    We report on the first stereoscopic observations of polar coronal jets made by the EUVI/SECCHI imagers on board the twin STEREO spacecraft. The significantly separated viewpoints (~11°) allowed us to infer the 3D dynamics and morphology of a well-defined EUV coronal jet for the first time. Triangulations of the jet's location in simultaneous image pairs led to the true 3D position and thereby its kinematics. Initially the jet ascends slowly at ≈10-20 km s⁻¹ and then, after an apparent 'jump' takes place, it accelerates impulsively to velocities exceeding 300 km s⁻¹ with accelerations exceeding the solar gravity. Helical structure is the most important geometrical feature of the jet, which shows evidence of untwisting. The jet structure appears strikingly different from each of the two STEREO viewpoints: face-on in the one viewpoint and edge-on in the other. This provides conclusive evidence that the observed helical structure is real and is not resulting...
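    The triangulation step, recovering a 3D position from the jet's image coordinates in the two simultaneous views, can be illustrated with standard linear (DLT) triangulation, assuming known 3×4 projection matrices for the two viewpoints. This is a generic sketch; the actual SECCHI reconstruction pipeline is more involved.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation: least-squares 3D point from two views.
        P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]               # null-space direction = homogeneous 3D point
        return X[:3] / X[3]
    ```

    Repeating this for the jet's position in successive image pairs yields the 3D trajectory, from which velocities and accelerations follow by differencing.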

  11. Stereoscopic, Force-Feedback Trainer For Telerobot Operators

    Science.gov (United States)

    Kim, Won S.; Schenker, Paul S.; Bejczy, Antal K.

    1994-01-01

    Computer-controlled simulator for training technicians to operate remote robots provides both visual and kinesthetic virtual reality. Used during initial stage of training; saves time and expense, increases operational safety, and prevents damage to robots by inexperienced operators. Computes virtual contact forces and torques of compliant robot in real time, providing operator with feel of forces experienced by manipulator as well as view in any of three modes: single view, two split views, or stereoscopic view. From keyboard, user specifies force-reflection gain and stiffness of manipulator hand for three translational and three rotational axes. System offers two simulated telerobotic tasks: insertion of peg in hole in three dimensions, and removal and insertion of drawer.

  12. The Extragalactic sky with the High Energy Stereoscopic System

    CERN Document Server

    2015-01-01

    The number of extragalactic sources detected at very high energy (VHE, E $>$ 100 GeV) has dramatically increased during the past years to reach more than fifty. The High Energy Stereoscopic System (H.E.S.S.) has observed the sky for more than 10 years now and has discovered about twenty objects. With the advent of the fifth, 28-meter telescope, the H.E.S.S. energy range extends down to ~30 GeV. When H.E.S.S. data are combined with data from the Fermi Large Area Telescope, the covered energy range spans several decades, allowing an unprecedented description of the spectra of extragalactic objects. In this talk, a review of the extragalactic sources studied with H.E.S.S. is given, together with the first H.E.S.S. phase II results on extragalactic sources.

  13. Relationship between Stereoscopic Vision, Visual Perception, and Microstructure Changes of Corpus Callosum and Occipital White Matter in the 4-Year-Old Very Low Birth Weight Children

    Directory of Open Access Journals (Sweden)

    Przemko Kwinta

    2015-01-01

    Aim: To assess the relationship between stereoscopic vision, visual perception, and the microstructure of the corpus callosum (CC) and occipital white matter, 61 children born with a mean birth weight of 1024 g (SD 270 g) were subjected to detailed ophthalmologic evaluation, the Developmental Test of Visual Perception (DTVP-3), and diffusion tensor imaging (DTI) at the age of 4. Results: Abnormal stereoscopic vision was detected in 16 children. Children with abnormal stereoscopic vision had a smaller CC (CC length: 53±6 mm versus 61±4 mm, p<0.01; estimated CC area: 314±106 mm² versus 446±79 mm², p<0.01) and lower fractional anisotropy (FA) values in the CC (FA of rostrum/genu: 0.7±0.09 versus 0.79±0.07, p<0.01; FA of CC body: 0.74±0.13 versus 0.82±0.09, p=0.03). We found a significant correlation between DTVP-3 scores, CC size, and FA values in the rostrum and body. This correlation was unrelated to retinopathy of prematurity. Conclusions: Visual perceptive dysfunction in ex-preterm children without major sequelae of prematurity depends on more subtle changes in the brain microstructure, including the CC. The role of interhemispheric connections in visual perception might be more complex than previously anticipated.

  14. Cataract surgery: emotional reactions of patients with monocular versus binocular vision

    Directory of Open Access Journals (Sweden)

    Roberta Ferrari Marback

    2012-12-01

    Purpose: To analyze emotional reactions related to cataract surgery in two groups of patients (monocular vision, Group 1; binocular vision, Group 2). Methods: A cross-sectional comparative study was performed using a structured questionnaire from a previous exploratory study, administered before cataract surgery. Results: 206 patients were enrolled in the study, 96 in Group 1 (69.3 ± 10.4 years) and 110 in Group 2 (68.2 ± 10.2 years). Fear of surgery was reported by 40.6% of Group 1 and 22.7% of Group 2 (p<0.001); among the main causes of fear were the possibility of vision loss, surgical complications, and death during the procedure. The most common feelings in both groups were doubts about the results of the surgery and nervousness about the procedure. Conclusion: Patients with monocular vision showed more fear and doubts related to cataract surgery than those with binocular vision. It is therefore necessary that physicians consider these emotional reactions and invest more time in explaining the risks and benefits of cataract surgery.

  15. Monocular surgery for large-angle esotropias: review and new paradigms

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2010-08-01

    The primitive strabismus surgeries, myotomies and tenotomies, were performed simply by sectioning the muscle or its tendon without any suture. Such surgeries were usually performed in just one eye, both for small and for large deviations, and the results were not very predictable. In 1922, Jameson introduced a new surgical technique using sutures and fixing the sectioned muscle to the sclera, increasing surgical predictability. For the esotropias he carried out recessions of no more than 5 mm of the medial rectus, which became a rule for the surgeons who followed him, making it impossible from then on to correct large-angle esotropias with monocular surgery. Rodriguez-Vásquez, in 1974, exceeded the 5 mm parameter by proposing large recessions of the medial recti (6 to 9 mm) to treat the Ciancia syndrome, with good results. The authors reviewed the literature year by year in order to compare the various studies, and concluded that monocular recess-resect surgery can be a viable option for the surgical treatment of large-angle esotropias.

  16. Evaluation of the reproducibility of Merchán's monocular dynamic retinoscopy

    Directory of Open Access Journals (Sweden)

    Lizbeth Acuña

    2010-08-01

    Objective: To evaluate the reproducibility of monocular dynamic retinoscopy and its level of agreement with binocular and monocular static retinoscopy, Nott retinoscopy, and the Monocular Estimate Method (MEM). Methods: Reproducibility between examiners and between methods was determined by means of the intraclass correlation coefficient (ICC), and Bland-Altman limits of agreement were established. Results: 126 people between 5 and 39 years of age were evaluated, and low inter-examiner reproducibility of monocular dynamic retinoscopy was found in both eyes (ICC right eye: 0.49, 95% CI 0.36-0.51; left eye: 0.51, 95% CI 0.38-0.59). The limit of agreement between examiners was ±1.25 D. When evaluating the reproducibility between monocular dynamic retinoscopy and static retinoscopy, the highest reproducibility was obtained with binocular and monocular static retinoscopy and, at near vision, between the Monocular Estimate Method and Nott retinoscopy. Conclusions: Monocular dynamic retinoscopy is not a reproducible test and shows clinically significant differences in determining the refractive state, in terms of dioptric power and type of ametropia; therefore, it cannot be included in the battery of tests applied to determine diagnoses and refractive corrections at either far or near vision.


  18. Influence of stereoscopic vision on task performance with an operating microscope

    NARCIS (Netherlands)

    Nibourg, Lisanne M.; Wanders, Wouter; Cornelissen, Frans W.; Koopmans, Steven A.

    2015-01-01

    PURPOSE: To determine the extent to which stereoscopic depth perception influences the performance of tasks executed under an operating microscope. SETTING: Laboratory of Experimental Ophthalmology, University Medical Center Groningen, the Netherlands. DESIGN: Experimental study. METHODS: Medical …

  19. Panoramic Stereoscopic Video System for Remote-Controlled Robotic Space Operations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this project, the development of a novel panoramic, stereoscopic video system was proposed. The proposed system, which contains no moving parts, uses three-fixed...

  1. Operation and maintenance manual for the high resolution stereoscopic video camera system (HRSVS) system 6230

    Energy Technology Data Exchange (ETDEWEB)

    Pardini, A.F., Westinghouse Hanford

    1996-07-16

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, is a stereoscopic camera system that will be used as an end effector on the LDUA to perform surveillance and inspection activities within Hanford waste tanks. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate.

  2. Monocular Depth Perception and Robotic Grasping of Novel Objects

    Science.gov (United States)

    2009-06-01

    in which local features were insufficient and more contextual information had to be used. Examples include image denoising [92], stereo vision [155 … partially visible in the image (e.g., Fig. 3.2, row 2: tree on the left). For a point lying on such an object, most of the point's neighbors lie outside … proved the equivalence of force-closure analysis with the study of the equilibria of an ordinary differential equation. All of these methods focussed …

  3. Embolic and nonembolic transient monocular visual field loss: a clinicopathologic review.

    Science.gov (United States)

    Petzold, Axel; Islam, Niaz; Hu, Han-Hwa; Plant, Gordon T

    2013-01-01

    Transient monocular blindness and amaurosis fugax are umbrella terms describing a range of patterns of transient monocular visual field loss (TMVL). The incidence rises from ≈1.5/100,000 in the third decade of life to ≈32/100,000 in the seventh decade of life. We review the vascular supply of the retina, which provides an anatomical basis for the types of TMVL, and discuss the importance of collaterals between the external and internal carotid artery territories and related blood-flow phenomena. Next, we address the semiology of TMVL, focusing on onset, pattern, trigger factors, duration, recovery, frequency, associated features such as headache, and tests that help with the important differential between embolic and non-embolic etiologies.

  4. A monocular vision system based on cooperative targets detection for aircraft pose measurement

    Science.gov (United States)

    Wang, Zhenyu; Wang, Yanyun; Cheng, Wei; Chen, Tao; Zhou, Hui

    2017-08-01

    In this paper, a monocular vision measurement system based on cooperative target detection is proposed, which captures the three-dimensional information of objects by recognizing a checkerboard target and computing its feature points. Aircraft pose measurement is an important problem for aircraft monitoring and control, and monocular vision systems perform well at ranges on the order of meters. This paper proposes an algorithm based on a coplanar rectangular feature to determine a unique solution for distance and angle. A continuous-frame detection method is presented to solve the problem of corner transitions caused by the symmetry of the targets. In addition, a test system combining a three-dimensional precision displacement table with human-computer interaction measurement software has been built. Experimental results show a precision of 2 mm over the range of 300 mm to 1000 mm, which meets the requirements of position measurement in an aircraft cabin.
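
Pose from a coplanar rectangular feature is a classic plane-based problem: with known intrinsics K, the homography of a Z = 0 target plane factors into two rotation columns and the translation. A minimal numpy sketch of that general technique (not the authors' algorithm; the target size, intrinsics and pose below are made up for a synthetic check):

```python
import numpy as np

def homography_dlt(world_xy, img_xy):
    """Direct linear transform for the homography mapping target-plane
    points (X, Y, 1) to image points (u, v, 1)."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, img_xy):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(H, K):
    """Recover rotation and translation of a Z = 0 target plane from H and K."""
    M = np.linalg.inv(K) @ H
    if M[2, 2] < 0:                    # fix the SVD sign ambiguity (target in front)
        M = -M
    M /= np.linalg.norm(M[:, 0])       # homography scale from the first column
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)        # re-orthonormalize against noise
    return U @ Vt, t

# Synthetic check: project a 0.2 m x 0.1 m rectangle, then recover the pose
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
th = 0.1
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([0.1, -0.05, 1.0])
corners = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.1), (0.0, 0.1)]
img = []
for X, Y in corners:
    p = K @ (R_true @ np.array([X, Y, 0.0]) + t_true)
    img.append((p[0] / p[2], p[1] / p[2]))
R_est, t_est = pose_from_homography(homography_dlt(corners, img), K)
```

Four non-collinear corners are exactly enough to fix the homography, which is why a rectangular cooperative target suffices for a unique distance-and-angle solution.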

  5. Monocular trajectory intersection method for 3D motion measurement of a point target

    Institute of Scientific and Technical Information of China (English)

    YU QiFeng; SHANG Yang; ZHOU Jian; ZHANG XiaoHu; LI LiChun

    2009-01-01

    This article proposes a monocular trajectory intersection method, a videometric measurement technique with a mature theoretical basis, to solve for the 3D motion parameters of a point target. It determines the target's motion parameters, including its 3D trajectory and velocity, by intersecting the parametric trajectory of a moving target with the series of sight-rays along which a moving camera observes the target, in contrast with the regular intersection method for 3D measurement, in which the sight-rays intersect at one point. The method offers an approach that overcomes the failure of traditional monocular measurements for the 3D motion of a point target, and thus extends the application fields of photogrammetry and computer vision. Wide application is expected in passive observation of moving targets from various mobile platforms.
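
The intersection of a parametric trajectory with a bundle of sight-rays reduces to linear least squares. Assuming a constant-velocity model p(t) = p0 + v·t (my simplification; the paper's parametric trajectories may be richer), each observation constrains p(t_i) − c_i to be parallel to the observed ray direction. A hypothetical numpy sketch:

```python
import numpy as np

def fit_linear_trajectory(cams, rays, times):
    """Least-squares constant-velocity trajectory p(t) = p0 + v*t that
    intersects the sight-rays (camera position cams[i], direction rays[i])
    observed at times[i]."""
    A, b = [], []
    for c, d, t in zip(cams, rays, times):
        d = np.asarray(d, float)
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projects onto the ray's normal plane
        A.append(np.hstack([P, t * P]))    # unknowns stacked as [p0, v]
        b.append(P @ np.asarray(c, float))
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[:3], x[3:]

# Synthetic check: a camera moving past a target on a straight path
p0, v = np.array([1.0, 2.0, 3.0]), np.array([0.5, -0.2, 0.1])
times = np.arange(6.0)
cams = [np.array([t, 0.5 * np.sin(t), 0.3 * t]) for t in times]
rays = [p0 + v * t - c for c, t in zip(cams, times)]
p0_est, v_est = fit_linear_trajectory(cams, rays, times)
```

The camera must move relative to the target for the rays to carry depth information; with a static camera every observation shares one projection center and the system stays rank-deficient.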

  6. A Novel Ship-Bridge Collision Avoidance System Based on Monocular Computer Vision

    Directory of Open Access Journals (Sweden)

    Yuanzhou Zheng

    2013-06-01

    Full Text Available This study investigates ship-bridge collision avoidance. A novel system for ship-bridge collision avoidance based on monocular computer vision is proposed. In the new system, moving ships are first captured in video sequences, and moving-object detection and tracking identify the regions of the scene they occupy. Second, a quantitative description of the dynamic state of each moving object in the geographical coordinate system, including location, velocity and orientation, is calculated based on monocular vision geometry. Finally, the collision risk is evaluated and ship-handling commands are suggested accordingly, aiming to avoid a potential collision. Both computer simulations and field experiments have been carried out to validate the proposed system, and the analysis results show its effectiveness.
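
Monocular vision geometry of this kind typically fixes the missing scale by assuming targets lie on the water plane: a pixel ray from a calibrated, extrinsically known camera is intersected with z = 0. A minimal sketch under that flat-plane assumption (illustrative intrinsics and pose, not the paper's system):

```python
import numpy as np

def pixel_to_ground(u, v, K, R, cam_pos):
    """Back-project pixel (u, v) onto the water plane z = 0.
    K: 3x3 intrinsics; R: world-to-camera rotation; cam_pos: camera
    position in world coordinates (z = height above the water)."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam              # viewing ray in the world frame
    s = -cam_pos[2] / ray_world[2]         # scale that brings the ray down to z = 0
    return cam_pos + s * ray_world

# Hypothetical bridge-mounted camera, 5 m above the water, tilted down
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
phi = 0.3                                  # tilt below the horizon (rad)
R = np.array([[1.0, 0, 0],                 # rows: camera right, down, forward axes
              [0, -np.sin(phi), -np.cos(phi)],
              [0, np.cos(phi), -np.sin(phi)]])
cam = np.array([0.0, 0.0, 5.0])

# Synthetic check: project a known ship position, then recover it
ship = np.array([2.0, 10.0, 0.0])
uvw = K @ (R @ (ship - cam))
u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
est = pixel_to_ground(u, v, K, R, cam)     # recovers `ship`
```

Velocity and heading then follow from two such fixes divided by the frame interval, which is essentially the "quantity description of the dynamic states" the abstract refers to.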

  8. Large-scale monocular FastSLAM2.0 acceleration on an embedded heterogeneous architecture

    Science.gov (United States)

    Abouzahir, Mohamed; Elouardi, Abdelhafid; Bouaziz, Samir; Latif, Rachid; Tajer, Abdelouahed

    2016-12-01

    Simultaneous localization and mapping (SLAM) is widely used in many robotic applications and in autonomous navigation. This paper presents a study of the computational complexity of FastSLAM2.0 based on a monocular vision system. The algorithm is intended to operate with many particles in a large-scale environment. FastSLAM2.0 was partitioned into functional blocks, allowing hardware-software matching on a CPU-GPGPU-based SoC architecture. Performance in terms of processing time and localization accuracy was evaluated using a real indoor dataset. Results demonstrate that an optimized and efficient CPU-GPGPU partitioning allows accurate localization and high-speed execution of a monocular FastSLAM2.0-based embedded system operating under real-time constraints.
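
FastSLAM's per-particle independence is what makes a CPU-GPGPU split attractive: prediction, per-particle map updates and weighting parallelize across particles, while inherently sequential blocks such as resampling often stay on the CPU. A toy sketch of one such block, systematic resampling (generic particle-filter code, not taken from the paper):

```python
import numpy as np

def systematic_resample(weights, seed=None):
    """Systematic resampling: draw N particle indices in O(N) with low
    variance, proportionally to the (unnormalized) weights."""
    rng = np.random.default_rng(seed)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw
    cumulative = np.cumsum(np.asarray(weights, float))
    cumulative /= cumulative[-1]
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)

# A degenerate weight vector: every draw must pick particle 2
idx = systematic_resample([0.0, 0.0, 1.0, 0.0], seed=0)
```

Because the cumulative sum is a sequential dependency, this step is a typical candidate for the CPU side of the partition, while the per-particle blocks feeding the weights run on the GPGPU.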

  9. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    Science.gov (United States)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopy has developed very rapidly and has been employed in many different fields, such as entertainment. Given the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology was conducted. In this paper, a survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major developments in 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or report some sickness. Therefore, we discuss stereoscopic visual discomfort and the extent to which viewers experience eye fatigue while watching 3D content or playing 3D games. Solutions suggested in the literature for this problem are also discussed.

  10. Stereoscopic perception of women in real and virtual environments: A study towards educational neuroscience

    Directory of Open Access Journals (Sweden)

    Georgios K. Zacharis

    2013-10-01

    Full Text Available Previous studies report the involvement of specific brain activation in stereoscopic vision and the perception of depth information. This work presents the first comparative results on the effects of stereoscopic perception in adult women across three different static environments with the same content: a real environment, a two-dimensional (2D) one and a stereoscopic three-dimensional (3D) one. The electrical brain activity of 36 female students was analyzed in the θ, α, β and γ frequency bands. Results in the alpha rhythm, as well as alpha desynchronization, showed that the topology of cerebral activity is the same in the three environments. The participants experienced three similar and non-demanding environments without specific memory requirements or information encoding. Statistical differences in theta activity showed that the real and 3D environments elicited similar cognitive processes, while the 2D environment caused an increase in anxiety, perhaps indicating that participants were looking for the third dimension. Beta and gamma activity showed that participants perceived the third dimension of the stereoscopic environment as in the real one, which did not happen in the 2D environment. Our findings indicate that stereoscopic 3D virtual environments approximate real ones with respect to the cognitive processes they elicit. Three-dimensional stereoscopic environments increase users' attention over 2D ones and demand less mental effort. These experimental results support the new field of educational neuroscience and its potential for the design of digital learning environments.

  11. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man presented to the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians might look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil-type erectile dysfunction drugs.

  12. Benign pituitary adenoma associated with hyperostosis of the sphenoid bone and monocular blindness. Case report.

    Science.gov (United States)

    Milas, R W; Sugar, O; Dobben, G

    1977-01-01

    The authors describe a case of benign chromophobe adenoma associated with hyperostosis of the lesser wing of the sphenoid bone and monocular blindness in a 38-year-old woman. The endocrinological and radiological evaluations were all suggestive of a meningioma. The diagnosis was established by biopsy of the tumor mass. After orbital decompression and removal of the tumor, the patient was treated with radiation therapy. Her postoperative course was uneventful, and her visual defects remained fixed.

  13. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    Directory of Open Access Journals (Sweden)

    Hoetzl Thomas

    2011-02-01

    Full Text Available Abstract Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomens upwards in a split second, producing Mexican-wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and the orientation of the body-length axis. Segmentation was the basis for the stereo matching, which defined correspondences between individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and away from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish, at the individual-bee level, active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method …
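
The step that converts stereo-matched bees into real-world coordinates is triangulation from two calibrated views. A sketch of standard linear (DLT) triangulation (the generic method, not the authors' exact implementation; the rig below is synthetic):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic stereo rig: two identical cameras, 10 cm baseline
K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
bee = np.array([0.2, -0.1, 2.0])              # a thorax position (m)
h = np.append(bee, 1.0)
u1 = (P1 @ h)[:2] / (P1 @ h)[2]
u2 = (P2 @ h)[:2] / (P2 @ h)[2]
bee_est = triangulate(P1, P2, u1, u2)         # recovers `bee`
```

Applied frame by frame to tracked correspondences, the differences of such 3D fixes give the dx, dy and dz motion components the abstract describes.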

  14. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats); it is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, and procedures and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgment. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem on a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite its emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  15. Monocular blur alters the tuning characteristics of stereopsis for spatial frequency and size.

    Science.gov (United States)

    Li, Roger W; So, Kayee; Wu, Thomas H; Craven, Ashley P; Tran, Truyet T; Gustafson, Kevin M; Levi, Dennis M

    2016-09-01

    Our sense of depth perception is mediated by spatial filters at different scales in the visual brain; low spatial frequency channels provide the basis for coarse stereopsis, whereas high spatial frequency channels provide for fine stereopsis. It is well established that monocular blurring of vision results in decreased stereoacuity. However, previous studies have used tests that are broadband in their spatial frequency content. It is not yet entirely clear how the processing of stereopsis in different spatial frequency channels is altered in response to binocular input imbalance. Here, we applied a new stereoacuity test based on narrow-band Gabor stimuli. By manipulating the carrier spatial frequency, we were able to reveal the spatial frequency tuning of stereopsis, spanning from coarse to fine, under blurred conditions. Our findings show that increasing monocular blur elevates stereoacuity thresholds 'selectively' at high spatial frequencies, gradually shifting the optimum frequency to lower spatial frequencies. Surprisingly, stereopsis for low frequency targets was only mildly affected even with an acuity difference of eight lines on a standard letter chart. Furthermore, we examined the effect of monocular blur on the size tuning function of stereopsis. The clinical implications of these findings are discussed.
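
A narrow-band Gabor stereo-target of the kind described is straightforward to generate: a sinusoidal carrier at the chosen spatial frequency under a Gaussian envelope. A sketch (parameters illustrative, not the study's stimulus code):

```python
import numpy as np

def gabor_patch(size, cycles_per_image, sigma_frac=0.15, contrast=1.0):
    """Vertical narrow-band Gabor: cosine carrier at `cycles_per_image`
    under a Gaussian envelope whose sigma is `sigma_frac` of the width."""
    xs = (np.arange(size) - size // 2) / size       # -0.5 .. 0.5
    x, y = np.meshgrid(xs, xs)
    carrier = np.cos(2 * np.pi * cycles_per_image * x)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_frac**2))
    return contrast * carrier * envelope

low = gabor_patch(256, 2)       # coarse-scale target (low spatial frequency)
high = gabor_patch(256, 32)     # fine-scale target, same envelope
```

Because the envelope limits the bandwidth around the carrier, varying `cycles_per_image` probes one spatial-frequency channel at a time, which is what lets such stimuli trace out the tuning of stereopsis from coarse to fine.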

  16. Short-term monocular patching boosts the patched eye’s response in visual cortex

    Science.gov (United States)

    Zhou, Jiawei; Baker, Daniel H.; Simard, Mathieu; Saint-Amour, Dave; Hess, Robert F.

    2015-01-01

    Abstract Purpose: Several recent studies have demonstrated that following short-term monocular deprivation in normal adults, the patched eye, rather than the unpatched eye, becomes stronger in subsequent binocular viewing. However, little is known about the site and nature of the underlying processes. In this study, we examine the underlying mechanisms by measuring steady-state visual evoked potentials (SSVEPs) as an index of the neural contrast response in early visual areas. Methods: The experiment consisted of three consecutive stages: a pre-patching EEG recording (14 minutes), a monocular patching stage (2.5 hours) and a post-patching EEG recording (14 minutes; started immediately after the removal of the patch). During the patching stage, a diffuser (transmits light but not pattern) was placed in front of one randomly selected eye. During the EEG recording stage, contrast response functions for each eye were measured. Results: The neural responses from the patched eye increased after the removal of the patch, whilst the responses from the unpatched eye remained the same. Such phenomena occurred under both monocular and dichoptic viewing conditions. Conclusions: We interpret this eye dominance plasticity in adult human visual cortex as homeostatic intrinsic plasticity regulated by an increase of contrast-gain in the patched eye. PMID:26410580

  17. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    Directory of Open Access Journals (Sweden)

    David L. Nelson, Michael J. Garay, Ralph A. Kahn, Ben A. Dunst

    2013-09-01

    Full Text Available The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0° (nadir) to 70° off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR’s operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.
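
The heart of a MISR-style stereo retrieval is that a feature's apparent along-track displacement in an oblique view mixes parallax (proportional to height) with advection by the wind during the time lag between views; disparities from two oblique cameras relative to nadir give a solvable 2×2 linear system. A much-simplified sketch of that geometry (idealized; the view angles and time lags below are illustrative, not MISR's actual values):

```python
import numpy as np

def height_and_wind(disparities, tan_view_angles, time_lags):
    """Solve  disparity_i = h * tan(angle_i) + w * dt_i  for feature
    height h (m above the surface) and along-track wind w (m/s), from
    along-track disparities (m) of two oblique cameras relative to nadir."""
    A = np.column_stack([np.asarray(tan_view_angles, float),
                         np.asarray(time_lags, float)])
    h, w = np.linalg.solve(A, np.asarray(disparities, float))
    return h, w

# Hypothetical plume at 2 km drifting at 10 m/s along-track
tans, lags = [0.49, 1.02], [45.0, 92.0]       # illustrative camera geometry
disp = [2000.0 * t + 10.0 * dt for t, dt in zip(tans, lags)]
h, w = height_and_wind(disp, tans, lags)      # h ≈ 2000.0, w ≈ 10.0
```

The two unknowns are separable only because the cameras differ in both view angle and observation time, which is exactly what MISR's nine-angle along-track design provides.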

  18. How to "hear" visual disparities: real-time stereoscopic spatial depth analysis using temporal resonance.

    Science.gov (United States)

    Porr, B; Cozzi, A; Wörgötter, F

    1998-05-01

    In a stereoscopic system, both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ('disparities') which are spatially evaluated in order to retrieve depth information (Sanger 1988; Fleet et al. 1991). A strong similarity exists between the analysis of visual disparities and the determination of the azimuth of a sound source (Wagner and Frost 1993). The direction of the sound is thereby determined from the temporal delay between the left and right ear signals (Konishi and Sullivan 1986). Similarly, here we transpose the spatially defined problem of disparity analysis into the temporal domain and utilize two resonators implemented in the form of causal (electronic) filters to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits real-time analysis and can be solved analytically for a step function contrast change, which is an important case in all real-world applications. The proposed theoretical framework for spatial depth retrieval directly utilizes a temporal algorithm borrowed from auditory signal analysis. Thus, the suggested similarity between the visual and the auditory system in the brain (Wagner and Frost 1993) finds its analogy here at the algorithmical level. We will compare the results from the temporal resonance algorithm with those obtained from several other techniques like cross-correlation or spatial phase-based disparity estimation showing that the novel algorithm achieves performances similar to the 'classical' approaches using much lower computational resources.
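
The idea of reading disparity as a local phase difference between left and right filter responses can be illustrated with a pair of quadrature Gabor filters (a generic phase-based sketch using symmetric FIR filters, not the authors' causal electronic resonators):

```python
import numpy as np

def phase_disparity(left, right, freq):
    """Disparity (in samples) of `right` relative to `left`, read off as
    the local phase difference of quadrature Gabor filter responses
    centred on the signals. `freq` is the carrier in cycles per sample."""
    n = len(left)
    x = np.arange(n) - n // 2
    g = np.exp(-x**2 / (2 * (n / 8.0) ** 2))      # Gaussian envelope
    even = g * np.cos(2 * np.pi * freq * x)       # quadrature filter pair
    odd = g * np.sin(2 * np.pi * freq * x)
    phase = lambda s: np.arctan2(np.dot(odd, s), np.dot(even, s))
    dphi = phase(right) - phase(left)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    return dphi / (2 * np.pi * freq)

# A sinusoid and a copy shifted right by 3 samples
f, n = 0.1, 129
xs = np.arange(n)
left = np.cos(2 * np.pi * f * xs)
right = np.cos(2 * np.pi * f * (xs - 3))
d = phase_disparity(left, right, f)               # close to 3.0
```

As in the auditory analogy, the phase difference only identifies shifts up to half the carrier period, so coarse-to-fine processing (or multiple filter frequencies) is needed for large disparities.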

  19. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Full Text Available Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotics education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Owing to its Internet-based design, multiple clients have the opportunity to perform online automation development. In the future, students at different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.
