WorldWideScience

Sample records for monocular stereoscopic images

  1. Digital stereoscopic imaging

    Science.gov (United States)

    Rao, A. Ravishankar; Jaimes, Alejandro

    1999-05-01

The convergence of inexpensive digital cameras and cheap hardware for displaying stereoscopic images has created the right conditions for the proliferation of stereoscopic imaging applications. One application, which is of growing importance to museums and cultural institutions, consists of capturing and displaying 3D images of objects at multiple orientations. In this paper, we present our stereoscopic imaging system and methodology for semi-automatically capturing multiple orientation stereo views of objects in a studio setting, and demonstrate the superiority of using a high resolution, high fidelity digital color camera for stereoscopic object photography. We show the superior performance achieved with the IBM TDI-Pro 3000 digital camera developed at IBM Research. We examine various choices related to the camera parameters and image capture geometry, and suggest a range of optimum values that work well in practice. We also examine the effect of scene composition and background selection on the quality of the stereoscopic image display. We demonstrate our technique with turntable views of objects from the IBM Corporate Archive.

  2. Stereoscopic medical imaging collaboration system

    Science.gov (United States)

    Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji

    2007-02-01

The computerization of clinical records and the adoption of multimedia have brought improvements to medical services in medical facilities. It is very important for patients that informed consent be comprehensible. Therefore, the doctor should plainly explain the purpose and the content of diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents three-dimensional medical images, such as X-ray CT and MRI, as stereoscopic images by using a virtual common information space and operating the image from a remote location. The system is composed of two personal computers, two 15-inch parallax-barrier stereoscopic LCD displays (LL-151D, Sharp), one 1Gbps router and 1000base LAN cables. The software is composed of a DICOM format data transfer program, an image operation program, a communication program between the two personal computers and a real time rendering program. Two identical images of 512×768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded or reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part. Therefore, the doctor and the patient can easily understand it, depending on their needs.

  3. The Enright phenomenon. Stereoscopic distortion of perceived driving speed induced by monocular pupil dilation.

    Science.gov (United States)

    Carkeet, Andrew; Wood, Joanne M; McNeill, Kylie M; McNeill, Hamish J; James, Joanna A; Holder, Leigh S

The Enright phenomenon describes the distortion in speed perception experienced by an observer looking sideways from a moving vehicle when viewing with interocular differences in retinal image brightness, usually induced by neutral density filters. We investigated whether the Enright phenomenon could be induced with monocular pupil dilation using tropicamide. We tested 17 visually normal young adults on a closed road driving circuit. Participants were asked to travel at Goal Speeds of 40km/h and 60km/h while looking sideways from the vehicle with: (i) both eyes with undilated pupils; (ii) both eyes with dilated pupils; (iii) only the leading eye dilated; and (iv) only the trailing eye dilated. For each condition we recorded actual driving speed. With the pupil of the leading eye dilated, participants drove significantly faster (by an average of 3.8km/h) than with both eyes dilated (p=0.02); with the trailing eye dilated, participants drove significantly slower (by an average of 3.2km/h) than with both eyes dilated (p<0.001). Driving speed with the leading eye dilated was faster, by an average of 7km/h, than with the trailing eye dilated (p<0.001). There was no significant difference between driving speeds when viewing with both eyes either dilated or undilated (p=0.322). Our results are the first to show a measurable change in driving behaviour following monocular pupil dilation and support predictions based on the Enright phenomenon. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  4. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

When an object is closer to an observer than the background, the small differences between right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched; the relative differences between right and left eye locations are then used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones do not have a corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I will discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for the human visual system; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area, and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

  5. Stereoscopic image production: live, CGI, and integration

    Science.gov (United States)

    Criado, Enrique

    2006-02-01

This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems encountered, and the solutions, with varying degrees of success, that we applied to those problems. Our work is mainly focused on the entertainment market: theme parks, museums, and other culture-related locations and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and real-time stereo monitoring, and to solve problems found with conventional film editing, compositing and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3-D movie production aspects.

  6. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance.

    Directory of Open Access Journals (Sweden)

    Christopher A Mela

We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously,

  7. Peculiarities of perception of stereoscopic radiation images in full colour

    International Nuclear Information System (INIS)

    Mamchev, G.V.

    1994-01-01

The principles of coloring stereoscopic radiation images to improve the discrimination of their three-dimensional structure are discussed. Results of analytical and experimental studies estimating the effect of stereoscopic image chromaticity on the accuracy of metric operations performed in three-dimensional space are given. 5 refs., 1 fig., 1 tab

  8. Grey and white matter changes in children with monocular amblyopia: voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Li, Qian; Jiang, Qinying; Guo, Mingxia; Li, Qingji; Cai, Chunquan; Yin, Xiaohui

    2013-04-01

To investigate the potential morphological alterations of grey and white matter in monocular amblyopic children using voxel-based morphometry (VBM) and diffusion tensor imaging (DTI). A total of 20 monocular amblyopic children and 20 age-matched controls were recruited. Whole-brain MRI scans were performed after a series of ophthalmologic exams. The imaging data were processed and two-sample t-tests were employed to identify group differences in grey matter volume (GMV), white matter volume (WMV) and fractional anisotropy (FA). After image screening, 12 amblyopic participants and 15 normal controls qualified for the VBM analyses. For the DTI analysis, 14 amblyopes and 14 controls were included. Compared to the normal controls, reduced GMVs were observed in the left inferior occipital gyrus, the bilateral parahippocampal gyrus and the left supramarginal/postcentral gyrus in the monocular amblyopic group, with the lingual gyrus presenting augmented GMV. Meanwhile, WMVs were reduced in the left calcarine, the bilateral inferior frontal and the right precuneus areas, and growth in WMVs was seen in the right cuneus, right middle occipital and left orbital frontal areas. Diminished FA values in the optic radiation and increased FA in the left middle occipital area and right precuneus were detected in amblyopic patients. In monocular amblyopia, cortices related to spatial vision underwent volume loss, which provides neuroanatomical evidence of stereoscopic defects. Additionally, white matter development was also hindered due to visual defects in amblyopes. Growth in the GMVs, WMVs and FA in the occipital lobe and precuneus may reflect a compensation effect by the unaffected eye in monocular amblyopia.

  9. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    Science.gov (United States)

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. © 2015 American Academy of Forensic Sciences.

  10. Stereoscopic radiographic images with gamma source encoding

    International Nuclear Information System (INIS)

Strocovsky, S.G.; Otero, D.

    2012-01-01

Conventional radiography with an X-ray tube has several drawbacks, such as the compromise between the size of the focal spot and the fluence. The finite dimensions of the focal spot impose a limit on the spatial resolution. Gamma radiography uses gamma-ray sources, which surpass X-ray tubes in size, portability and simplicity. However, their low intrinsic fluence forces the use of extended sources, which also degrade the spatial resolution. In this work, we show the principles of a new radiographic technique that overcomes the limitations associated with the finite dimensions of X-ray sources, and that offers additional benefits over conventional techniques. The new technique, called coding source imaging (CSI), is based on the use of extended sources, edge-encoding of radiation and differential detection. The mathematical principles and the method of image reconstruction with the newly proposed technique are explained in the present work. Analytical calculations were made to determine the maximum spatial resolution and the variables on which it depends. The CSI technique was tested by means of Monte Carlo simulations with sets of spherical objects. We show that CSI has stereoscopic capabilities and can resolve objects smaller than the source size. The CSI decoding algorithm simultaneously reconstructs four different projections of the same object, while conventional radiography produces only one projection per acquisition. The projections are located in separate image fields on the detector plane. Our results show it is possible to apply an extremely simple radiographic technique with extended sources and obtain 3D information on the attenuation coefficient distribution of simple-geometry objects in a single acquisition. The results are promising enough to evaluate the possibility of future research with more complex objects typical of medical diagnostic radiography and industrial gamma radiography (author)

  11. Stereoscopic radiographic images with thermal neutrons

    International Nuclear Information System (INIS)

    Silvani, M.I.; Almeida, G.L.; Rogers, J.D.; Lopes, R.T.

    2011-01-01

The spatial structure of an object can be perceived through the stereoscopic vision provided by the eyes or through the parallax produced by movement of the object with respect to the observer. For an opaque object, a technique to render it transparent should be used, in order to make the spatial distribution of its inner structure visible with either of the two approaches. In this work, a beam of thermal neutrons at the main port of the Argonauta research reactor of the Instituto de Engenharia Nuclear in Rio de Janeiro/Brazil has been used as radiation to render the inspected objects partially transparent. A neutron-sensitive Imaging Plate has been employed as a detector and, after exposure, it has been developed by a reader using a 0.5 μm laser beam, which defines the finest achievable spatial resolution of the acquired digital image. This image, a radiographic attenuation map of the object, does not represent any specific cross-section but a convoluted projection for each specific attitude of the object with respect to the detector. After two of these projections are taken at different object attitudes, they are properly processed and the final image is viewed through red-green eyeglasses. For monochromatic images this processing involves transforming the black-and-white radiographs into red-and-white and green-and-white ones, which are afterwards merged to yield a single image. All the processing is carried out with the software ImageJ. Divergence of the neutron beam unfortunately spoils both spatial and contrast resolution, which become poorer as the object-detector distance increases. Therefore, in order to evaluate the range of spatial resolution corresponding to the 3D image being observed, a curve expressing spatial resolution against object-detector gap has been deduced experimentally from the Modulation Transfer Functions. Typical exposure times, under a reactor power of 170 W, were 6 min for both quantitative and qualitative measurements. In spite of its intrinsic constraints

  12. Stereoscopic radiographic images with thermal neutrons

    Science.gov (United States)

    Silvani, M. I.; Almeida, G. L.; Rogers, J. D.; Lopes, R. T.

    2011-10-01

The spatial structure of an object can be perceived through the stereoscopic vision provided by the eyes or through the parallax produced by movement of the object with respect to the observer. For an opaque object, a technique to render it transparent should be used, in order to make the spatial distribution of its inner structure visible with either of the two approaches. In this work, a beam of thermal neutrons at the main port of the Argonauta research reactor of the Instituto de Engenharia Nuclear in Rio de Janeiro/Brazil has been used as radiation to render the inspected objects partially transparent. A neutron-sensitive Imaging Plate has been employed as a detector and, after exposure, it has been developed by a reader using a 0.5 μm laser beam, which defines the finest achievable spatial resolution of the acquired digital image. This image, a radiographic attenuation map of the object, does not represent any specific cross-section but a convoluted projection for each specific attitude of the object with respect to the detector. After two of these projections are taken at different object attitudes, they are properly processed and the final image is viewed through red-green eyeglasses. For monochromatic images this processing involves transforming the black-and-white radiographs into red-and-white and green-and-white ones, which are afterwards merged to yield a single image. All the processing is carried out with the software ImageJ. Divergence of the neutron beam unfortunately spoils both spatial and contrast resolution, which become poorer as the object-detector distance increases. Therefore, in order to evaluate the range of spatial resolution corresponding to the 3D image being observed, a curve expressing spatial resolution against object-detector gap has been deduced experimentally from the Modulation Transfer Functions. Typical exposure times, under a reactor power of 170 W, were 6 min for both quantitative and qualitative measurements. In spite of its intrinsic constraints
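
    As a hedged illustration of the anaglyph merging step described in the two records above, the short Python sketch below places two registered grayscale projections into the red and green channels of a single colour image; the file names and the channel assignment are illustrative assumptions, and the original processing was carried out in ImageJ rather than in Python.

        import numpy as np
        from PIL import Image

        def make_anaglyph(first_path, second_path, out_path):
            """Merge two registered grayscale radiographs into a red-green anaglyph.

            The first-attitude projection goes into the red channel and the
            second-attitude projection into the green channel, so the pair can be
            viewed through red-green eyeglasses as described above.
            """
            first = np.asarray(Image.open(first_path).convert("L"), dtype=np.uint8)
            second = np.asarray(Image.open(second_path).convert("L"), dtype=np.uint8)
            if first.shape != second.shape:
                raise ValueError("the two projections must be registered to the same size")

            rgb = np.zeros(first.shape + (3,), dtype=np.uint8)
            rgb[..., 0] = first    # red channel   <- first object attitude
            rgb[..., 1] = second   # green channel <- second object attitude
            Image.fromarray(rgb).save(out_path)

        # make_anaglyph("attitude_a.png", "attitude_b.png", "anaglyph.png")  # paths are hypothetical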

  13. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the super multi-view concept proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as the visual display units) in order to present a picture to the left eye and the right eye. The left and right images form a stereoscopic pair for the two eyes, so stereoscopic 3D images are observed.

  14. Disparity modifications and the emotional effects of stereoscopic images

    Science.gov (United States)

    Kawai, Takashi; Atsuta, Daiki; Tomiyama, Yuya; Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Häkkinen, Jukka

    2014-03-01

This paper describes a study that focuses on disparity changes in emotional scenes of stereoscopic (3D) images, in which the effects on pleasantness and arousal were examined by adding binocular disparity to 2D images that evoke specific emotions, and by applying a disparity modification based on the disparity analysis of well-known 3D movies. In the results of the experiment, for pleasantness, a significant difference was found only for the main effect of emotion. For arousal, on the other hand, the evaluation values for happiness, surprise, and fear tended to increase in the order of the 2D condition, the 3D condition, and the 3D condition with the disparity modification applied. This suggests the possibility that binocular disparity and its modification affect arousal.

  15. Visual perception and stereoscopic imaging: an artist's perspective

    Science.gov (United States)

    Mason, Steve

    2015-03-01

This paper continues my exploration, begun at the February 2014 IS&T/SPIE convention, into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that by using stereoscopic imaging people may consciously experience, or see, what they are viewing and thereby become more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images resulting from this research allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope not only to raise awareness of visual processing but also to explore the differences and similarities between the artist and the scientist: art increases right-brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what evidence and experience may indicate, in order to see what is happening in his work and to allow it to develop in ways he/she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just the thinking, that insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the proverbial "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and from different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. People who have experienced these images in the context of examining their own

  16. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception of the locations of moving objects such as aircraft, vehicles and personnel. This new model provides an important complement to airport surface surveillance, which is different from the traditional surface surveillance radar techniques. Such a technique not only provides the ATC with a clear view of object activity, but also provides image recognition and positioning of moving targets in the area. It can thereby improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied in airport surface surveillance and then presents a measurement accuracy analysis of the model. The monocular visual imaging technology model is simple, low cost, and highly efficient. It is an advanced monitoring technique which can make up for the blind spots of surface surveillance radar monitoring and positioning systems.
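
    The abstract does not give the model's equations; as a hedged sketch of one common way a single calibrated camera can localize a target on the airport surface, the Python snippet below back-projects an image pixel through the camera centre and intersects the viewing ray with the ground plane z = 0 (the intrinsic matrix, camera pose and pixel coordinates are assumptions for illustration, not values from the paper).

        import numpy as np

        def ground_position(pixel, K, R, t):
            """Estimate the world position of a point on the ground plane z = 0
            observed at image coordinates `pixel` = (u, v).

            K is the 3x3 camera intrinsic matrix; R, t define the world-to-camera
            transform x_cam = R @ x_world + t. Returns the (x, y, 0) intersection
            of the viewing ray with the plane.
            """
            uv1 = np.array([pixel[0], pixel[1], 1.0])
            ray_cam = np.linalg.inv(K) @ uv1       # ray direction in the camera frame
            ray_world = R.T @ ray_cam              # rotate the ray into the world frame
            cam_center = -R.T @ t                  # camera centre in world coordinates
            s = -cam_center[2] / ray_world[2]      # scale that brings the ray to z = 0
            return cam_center + s * ray_world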

  17. Application of stereoscopic particle image velocimetry to studies of transport in a dusty (complex) plasma

    International Nuclear Information System (INIS)

    Thomas, Edward Jr.; Williams, Jeremiah D.; Silver, Jennifer

    2004-01-01

    Over the past 5 years, two-dimensional particle image velocimetry (PIV) techniques [E. Thomas, Jr., Phys. Plasmas 6, 2672 (1999)] have been used to obtain detailed measurements of microparticle transport in dusty plasmas. This Letter reports on an extension of these techniques to a three-dimensional velocity vector measurement approach using stereoscopic PIV. Initial measurements using the stereoscopic PIV diagnostic are presented

  18. Many-core computing for space-based stereoscopic imaging

    Science.gov (United States)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.

  19. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
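
    As an illustration of the kind of network the abstract describes, an RGB image concatenated with a sparse depth channel and regressed to a dense depth map, here is a minimal untrained convolutional sketch in PyTorch; the layer sizes and the 4-channel input layout are assumptions, not the authors' architecture.

        import torch
        import torch.nn as nn

        class SparseDepthNet(nn.Module):
            """Toy fully convolutional model mapping an RGB image plus a sparse
            LiDAR depth channel (4 input channels) to a dense per-pixel depth map."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, kernel_size=3, padding=1),  # dense depth estimate
                )

            def forward(self, rgb, sparse_depth):
                # rgb: (N, 3, H, W); sparse_depth: (N, 1, H, W), zero where no LiDAR return exists
                return self.net(torch.cat([rgb, sparse_depth], dim=1))

        # model = SparseDepthNet()
        # depth = model(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))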

  20. Stereoscopic Visualization of Diffusion Tensor Imaging Data: A Comparative Survey of Visualization Techniques

    International Nuclear Information System (INIS)

    Raslan, O.; Debnam, J.M.; Ketonen, L.; Kumar, A.J.; Schellingerhout, D.; Wang, J.

    2013-01-01

Diffusion tensor imaging (DTI) data have traditionally been displayed as a gray scale fractional anisotropy map (GSFM) or a color coded orientation map (CCOM). These methods use black and white or color with intensity values to map the complex multidimensional DTI data to a two-dimensional image. Alternative visualization techniques, such as Vmax maps, utilize an enhanced graphical representation of the principal eigenvector by means of a headless arrow on a regular non-stereoscopic (VM) or stereoscopic display (VMS). A survey of clinical utility in patients with intracranial neoplasms was carried out by 8 neuroradiologists using traditional and nontraditional methods of DTI display. Pairwise comparison studies of 5 intracranial neoplasms were performed with a structured questionnaire comparing GSFM, CCOM, VM, and VMS. Six of 8 neuroradiologists favored Vmax maps over traditional methods of display (GSFM and CCOM). When comparing the stereoscopic (VMS) and the non-stereoscopic (VM) modes, 4 favored VMS, 2 favored VM, and 2 had no preference. In conclusion, processing and visualizing DTI data stereoscopically is technically feasible. An initial survey of users indicated that Vmax based display methodology, with or without stereoscopic visualization, seems to be preferred over traditional methods to display DTI data.

  1. Subjective and objective measurements of visual fatigue induced by excessive disparities in stereoscopic images

    Science.gov (United States)

    Jung, Yong Ju; Kim, Dongchan; Sohn, Hosik; Lee, Seong-il; Park, Hyun Wook; Ro, Yong Man

    2013-03-01

As stereoscopic displays have spread, it is important to know what really causes visual fatigue and discomfort and what happens in the visual system in the brain behind the retina while viewing stereoscopic 3D images on the displays. In this study, functional magnetic resonance imaging (fMRI) was used as the objective measurement to assess the human brain regions involved in the processing of stereoscopic stimuli with excessive disparities. Based on the subjective measurement results, we selected two subsets of comfort videos and discomfort videos in our dataset. Then, an fMRI experiment was conducted with the subsets of comfort and discomfort videos in order to identify which brain regions were activated while viewing the discomfort videos in a stereoscopic display. We found that, when viewing a stereoscopic display, the right middle frontal gyrus, the right inferior frontal gyrus, the right intraparietal lobule, the right middle temporal gyrus, and the bilateral cuneus were significantly activated during the processing of excessive disparities, compared to those of small disparities (< 1 degree).

  2. Stereoscopic Three-Dimensional Visualization Applied to Multimodal Brain Images: Clinical Applications and a Functional Connectivity Atlas.

    Directory of Open Access Journals (Sweden)

    Gonzalo M Rojas

    2014-11-01

Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity.

  3. No-reference stereoscopic image quality measurement based on generalized local ternary patterns of binocular energy response

    International Nuclear Information System (INIS)

    Zhou, Wujie; Yu, Lu

    2015-01-01

    Perceptual no-reference (NR) quality measurement of stereoscopic images has become a challenging issue in three-dimensional (3D) imaging fields. In this article, we propose an efficient binocular quality-aware features extraction scheme, namely generalized local ternary patterns (GLTP) of binocular energy response, for general-purpose NR stereoscopic image quality measurement (SIQM). More specifically, we first construct the binocular energy response of a distorted stereoscopic image with different stimuli of amplitude and phase shifts. Then, the binocular quality-aware features are generated from the GLTP of the binocular energy response. Finally, these features are mapped to the subjective quality score of the distorted stereoscopic image by using support vector regression. Experiments on two publicly available 3D databases confirm the effectiveness of the proposed metric compared with the state-of-the-art full reference and NR metrics. (paper)
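
    A hedged sketch of the general feature-to-score pipeline outlined above: local ternary pattern histograms are computed from a (precomputed) binocular energy response map and mapped to subjective scores with support vector regression. The threshold, histogram coding and training variables are illustrative, not the authors' exact GLTP formulation.

        import numpy as np
        from sklearn.svm import SVR

        def ltp_histogram(resp, t=5.0):
            """Local ternary pattern histogram of a 2-D response map.

            Each pixel is compared with its 8 neighbours: differences above +t set
            bits of an 'upper' code, differences below -t set bits of a 'lower'
            code, and the two 256-bin histograms are concatenated as the feature.
            """
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            center = resp[1:-1, 1:-1]
            upper = np.zeros(center.shape, dtype=np.uint8)
            lower = np.zeros(center.shape, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                neigh = resp[1 + dy:resp.shape[0] - 1 + dy,
                             1 + dx:resp.shape[1] - 1 + dx]
                upper |= (neigh >= center + t).astype(np.uint8) << bit
                lower |= (neigh <= center - t).astype(np.uint8) << bit
            hist = np.concatenate([np.bincount(upper.ravel(), minlength=256),
                                   np.bincount(lower.ravel(), minlength=256)])
            return hist / hist.sum()

        # features: one histogram per training image; mos: subjective scores (both hypothetical)
        # model = SVR(kernel="rbf").fit(np.vstack(features), mos)
        # quality = model.predict(ltp_histogram(test_response)[None, :])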

  4. In-line phase-contrast stereoscopic X-ray imaging for radiological purposes: An initial experimental study

    International Nuclear Information System (INIS)

    Siegbahn, E.A.; Coan, P.; Zhou, S.-A.; Bravin, A.; Brahme, A.

    2011-01-01

    We report results from a pilot study in which the in-line propagation-based phase-contrast imaging technique is combined with the stereoscopic method. Two phantoms were imaged at several sample-detector distances using monochromatic, 30 keV, X-rays. High contrast- and spatial-resolution phase-contrast stereoscopic pairs of X-ray images were constructed using the anaglyph approach and a vivid stereoscopic effect was demonstrated. On the other hand, images of the same phantoms obtained with a shorter sample-to-detector distance, but otherwise the same experimental conditions (i.e. the same X-ray energy and absorbed radiation dose), corresponding to the conventional attenuation-based imaging mode, hardly revealed stereoscopic effects because of the lower image contrast produced. These results have confirmed our hypothesis that stereoscopic X-ray images of samples with objects composed of low-atomic-number elements are considerably improved if phase-contrast imaging is used. It is our belief that the high-resolution phase-contrast stereoscopic method will be a valuable new medical imaging tool for radiologists and that it will be of help to enhance the diagnostic capability in the examination of patients in future clinical practice, even though further efforts will be needed to optimize the system performance.

  5. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

Blind quality assessment of 3D images encounters more new challenges than its 2D counterpart. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  6. Continuous monitoring of prostate position using stereoscopic and monoscopic kV image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, M. Tynan R.; Parsons, Dave D.; Robar, James L. [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia B3H 4R2, Canada and Nova Scotia Cancer Centre, QEII Health Science Centre, Halifax, Nova Scotia B3H 2Y9 (Canada)

    2016-05-15

    Purpose: To demonstrate continuous kV x-ray monitoring of prostate motion using both stereoscopic and monoscopic localizations, assess the spatial accuracy of these techniques, and evaluate the dose delivered from the added image guidance. Methods: The authors implemented both stereoscopic and monoscopic fiducial localizations using a room-mounted dual oblique x-ray system. Recently developed monoscopic 3D position estimation techniques potentially overcome the issue of treatment head interference with stereoscopic imaging at certain gantry angles. To demonstrate continuous position monitoring, a gold fiducial marker was placed in an anthropomorphic phantom and placed on the Linac couch. The couch was used as a programmable translation stage. The couch was programmed with a series of patient prostate motion trajectories exemplifying five distinct categories: stable prostate, slow drift, persistent excursion, transient excursion, and high frequency excursions. The phantom and fiducial were imaged using 140 kVp, 0.63 mAs per image at 1 Hz for a 60 s monitoring period. Both stereoscopic and monoscopic 3D localization accuracies were assessed by comparison to the ground-truth obtained from the Linac log file. Imaging dose was also assessed, using optically stimulated luminescence dosimeter inserts in the phantom. Results: Stereoscopic localization accuracy varied between 0.13 ± 0.05 and 0.33 ± 0.30 mm, depending on the motion trajectory. Monoscopic localization accuracy varied from 0.2 ± 0.1 to 1.1 ± 0.7 mm. The largest localization errors were typically observed in the left–right direction. There were significant differences in accuracy between the two monoscopic views, but which view was better varied from trajectory to trajectory. The imaging dose was measured to be between 2 and 15 μGy/mAs, depending on location in the phantom. Conclusions: The authors have demonstrated the first use of monoscopic localization for a room-mounted dual x-ray system. Three
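
    The abstract reports localization accuracies but not the reconstruction equations; as a hedged illustration of stereoscopic fiducial localization in general, the sketch below triangulates a 3-D marker position from its detections in the two oblique views by linear (DLT) least squares, assuming the 3x4 projection matrices of both kV imagers are known from calibration (the variable names are illustrative).

        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """Linear DLT triangulation of one fiducial from two calibrated views.

            P1, P2 : 3x4 projection matrices of the two oblique kV imagers.
            uv1, uv2 : (u, v) detections of the marker in the corresponding images.
            Returns the estimated 3-D marker position in the room coordinate frame.
            """
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]   # dehomogenize the least-squares solution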

  7. Joint optic disc and cup boundary extraction from monocular fundus images.

    Science.gov (United States)

    Chakravarty, Arunava; Sivaswamy, Jayanthi

    2017-08-01

    Accurate segmentation of optic disc and cup from monocular color fundus images plays a significant role in the screening and diagnosis of glaucoma. Though optic cup is characterized by the drop in depth from the disc boundary, most existing methods segment the two structures separately and rely only on color and vessel kink based cues due to the lack of explicit depth information in color fundus images. We propose a novel boundary-based Conditional Random Field formulation that extracts both the optic disc and cup boundaries in a single optimization step. In addition to the color gradients, the proposed method explicitly models the depth which is estimated from the fundus image itself using a coupled, sparse dictionary trained on a set of image-depth map (derived from Optical Coherence Tomography) pairs. The estimated depth achieved a correlation coefficient of 0.80 with respect to the ground truth. The proposed segmentation method outperformed several state-of-the-art methods on five public datasets. The average dice coefficient was in the range of 0.87-0.97 for disc segmentation across three datasets and 0.83 for cup segmentation on the DRISHTI-GS1 test set. The method achieved a good glaucoma classification performance with an average AUC of 0.85 for five fold cross-validation on RIM-ONE v2. We propose a method to jointly segment the optic disc and cup boundaries by modeling the drop in depth between the two structures. Since our method requires a single fundus image per eye during testing it can be employed in the large-scale screening of glaucoma where expensive 3D imaging is unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Analysis of scene distortions in stereoscopic images due to the variation of the ideal viewing conditions

    Science.gov (United States)

    Viale, Alberto; Villa, Dario

    2011-03-01

Stereoscopy has recently grown considerably in popularity, and various technologies allowing the observation of stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. However, there are some golden rules that users should follow to better enjoy stereoscopic images; first of all, the viewing conditions should not be too different from the ideal ones assumed during the production process. To allow the user to perceive stereo depth instead of a flat image, two different views of the same scene are shown to the subject, one seen only through the left eye and the other only through the right; the visual system then performs the work of merging the two images into a virtual three-dimensional scene, giving the user the perception of depth. The two images presented to the user were created, either from image synthesis or from more traditional techniques, following the rules of perspective. These rules require some boundary conditions to be made explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation. In this paper we study how deviations of the viewer position and orientation from the ideal values, specified as parameters in the image creation process, affect the correctness of the reconstruction of the three-dimensional virtual scene.
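
    For reference, the basic similar-triangles relation that underlies such analyses: a point displayed with on-screen parallax p, viewed from distance D with interocular separation e, is perceived at distance Z = e*D / (e - p) from the viewer (behind the screen for positive p, in front of it for negative p). A small sketch, with all numeric values assumed purely for illustration:

        def perceived_depth(parallax_mm, eye_sep_mm=65.0, view_dist_mm=2000.0):
            """Perceived distance of a fused point from the viewer, by similar
            triangles: Z = e*D / (e - p). Positive parallax (p < e) places the
            point behind the screen, negative parallax in front, p = 0 on it."""
            return eye_sep_mm * view_dist_mm / (eye_sep_mm - parallax_mm)

        # Halving the viewing distance roughly halves the reconstructed depth:
        # perceived_depth(10.0, view_dist_mm=2000.0)  ->  about 2364 mm
        # perceived_depth(10.0, view_dist_mm=1000.0)  ->  about 1182 mm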

  9. Full aperture imaging with stereoscopic properties in nuclear medicine

    International Nuclear Information System (INIS)

    Strocovsky, Sergio G.; Otero, D.

    2011-01-01

The imaging techniques based on the gamma camera (GC) and used in nuclear medicine have low spatial resolution and low sensitivity due to the use of the collimator. However, this element is essential for the formation of images in a GC. The aim of this work is to show the principles of a new technique to overcome the limitations of existing GC-based techniques. Here, we present a Full Aperture Imaging (FAI) technique which is based on the edge-encoding of gamma radiation and differential detection. It takes advantage of the fact that gamma radiation is spatially incoherent. The mathematical principles and the method of image reconstruction with the newly proposed technique are explained in detail. The FAI technique is tested by means of Monte Carlo simulations with filiform and spherical sources. The results show that the FAI technique has greater sensitivity (>100 times) and greater spatial resolution (>2.6 times) than a GC with a LEHR collimator, in both cases with and without attenuating material and in long- and short-distance configurations. The FAI decoding algorithm simultaneously reconstructs four different projections, which are located in separate image fields on the detector plane, while a GC produces only one projection per acquisition. The simulations have allowed comparison of both techniques under identical ideal conditions. Our results show it is possible to apply an extremely simple encoded imaging technique and obtain three-dimensional radioactivity information for sources of simple geometry. The results are promising enough to evaluate the possibility of future research with more complex sources typical of nuclear medicine imaging. (author)

  10. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focuses on colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries coloured similarly to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) can then be processed by fuzzy logic or neural networks to control the robot’s next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and wooden and concrete floors, but had difficulty separating colours in multicoloured floor types such as patterned carpets.
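
    A hedged OpenCV sketch of the pipeline this record describes: segment the floor by colour, add Canny edges to catch boundaries of similarly coloured surfaces, and combine the two into a binary traversability map. The HSV thresholds and kernel size are placeholders, not values from the paper.

        import cv2
        import numpy as np

        def traversability_map(frame_bgr,
                               floor_lo=(0, 0, 60), floor_hi=(40, 60, 220)):
            """Return a binary map: white = obstacle-free floor, black = obstacle.

            Colour segmentation against an assumed floor range in HSV is combined
            with Canny edges so boundaries of similarly coloured regions are also
            treated as non-traversable.
            """
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            floor = cv2.inRange(hsv, np.array(floor_lo, np.uint8),
                                np.array(floor_hi, np.uint8))

            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # thicken boundary lines

            return cv2.bitwise_and(floor, cv2.bitwise_not(edges))

        # binary_map = traversability_map(cv2.imread("frame.png"))  # path is hypothetical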

  11. Poster - 48: Clinical assessment of ExacTrac stereoscopic imaging of spine alignment for lung SBRT

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Summers, Clare; Robar, James

    2016-01-01

Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment using a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3mm shift or 3° rotation (in any of the 6 degrees). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4mm. Using the 3mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients who require soft tissue image registration from CBCT. Spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.

  12. Poster - 48: Clinical assessment of ExacTrac stereoscopic imaging of spine alignment for lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

Sattarivand, Mike; Summers, Clare; Robar, James [Nova Scotia Cancer Centre (Canada)]

    2016-08-15

Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment using a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3mm shift or 3° rotation (in any of the 6 degrees). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4mm. Using the 3mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients who require soft tissue image registration from CBCT. Spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.

  13. Surface topography characterization using 3D stereoscopic reconstruction of SEM images

    Science.gov (United States)

    Vedantha Krishna, Amogh; Flys, Olena; Reddy, Vijeth V.; Rosén, B. G.

    2018-06-01

A major drawback of the optical microscope is its limited ability to resolve finer details. Many microscopes have been developed to overcome the limitations set by the diffraction of visible light. The scanning electron microscope (SEM) is one such alternative: it uses electrons for imaging, which have a much smaller wavelength than photons. As a result, high magnification with superior image resolution can be achieved. However, SEM generates 2D images, which provide limited data for surface measurements and analysis. Many research areas require knowledge of 3D structures, as they contribute to a comprehensive understanding of microstructure by allowing effective measurements and qualitative visualization of the samples under study. For this reason, a stereo photogrammetry technique is employed to convert SEM images into 3D measurable data. This paper aims to utilize a stereoscopic reconstruction technique as a reliable method for characterization of surface topography. Reconstructed results from SEM images are compared with coherence scanning interferometer (CSI) results obtained by measuring a roughness reference standard sample. This paper presents a method to select the most robust/consistent surface texture parameters that are insensitive to the uncertainties involved in the reconstruction technique itself. Results from two stereoscopic reconstruction algorithms are also documented in this paper.

  14. Matching methods evaluation framework for stereoscopic breast x-ray images.

    Science.gov (United States)

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considering x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed to develop a framework dedicated to x-ray stereoscopic breast images, used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
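
    As an illustration of the family of matching costs the study compares (SAD/SSD variants), here is a minimal block-matching sketch that builds a disparity map by minimising the sum of absolute differences over a horizontal search range; the window size and search range are arbitrary, and the locally scaled and zero-mean variants evaluated in the paper additionally normalise each window.

        import numpy as np

        def sad_block_matching(left, right, max_disp=32, win=5):
            """Dense disparity map for a rectified stereo pair by minimising the
            sum of absolute differences (SAD) over a (win x win) window."""
            h, w = left.shape
            half = win // 2
            left = left.astype(np.float32)
            right = right.astype(np.float32)
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half + max_disp, w - half):
                    ref = left[y - half:y + half + 1, x - half:x + half + 1]
                    costs = [np.abs(ref - right[y - half:y + half + 1,
                                                x - d - half:x - d + half + 1]).sum()
                             for d in range(max_disp)]
                    disp[y, x] = int(np.argmin(costs))  # disparity with the lowest cost
            return disp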

  15. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

Since stereoscopic images can provide observers with both a realistic and an uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. Second, three visual features, the average disparity, average width and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, the object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and, third, apply four different models to more precisely predict visual comfort. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
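
    The PLCC and SROCC figures quoted above are the standard validation statistics for such metrics; as a hedged sketch, they can be computed from paired objective and subjective scores with scipy (the score arrays are placeholders).

        from scipy.stats import pearsonr, spearmanr

        def evaluate_metric(objective_scores, subjective_scores):
            """Pearson linear (PLCC) and Spearman rank-order (SROCC) correlation
            between predicted and subjective visual comfort scores."""
            plcc, _ = pearsonr(objective_scores, subjective_scores)
            srocc, _ = spearmanr(objective_scores, subjective_scores)
            return plcc, srocc

        # plcc, srocc = evaluate_metric(predicted_vca_scores, mean_opinion_scores)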

  16. Stereoscopic Planar Laser-Induced Fluorescence Imaging at 500 kHz

    Science.gov (United States)

    Medford, Taylor L.; Danehy, Paul M.; Jones, Stephen B.; Jiang, N.; Webster, M.; Lempert, Walter; Miller, J.; Meyer, T.

    2011-01-01

A new measurement technique for obtaining time- and spatially-resolved image sequences in hypersonic flows is developed. Nitric-oxide planar laser-induced fluorescence (NO PLIF) has previously been used to investigate transition from laminar to turbulent flow in hypersonic boundary layers using both planar and volumetric imaging capabilities. Low flow rates of NO were typically seeded into the flow, minimally perturbing the flow. The volumetric imaging was performed at a measurement rate of 10 Hz using a thick planar laser sheet that excited NO fluorescence. The fluorescence was captured by a pair of cameras having slightly different views of the flow. Subsequent stereoscopic reconstruction of these images allowed the three-dimensional flow structures to be viewed. In the current paper, this approach has been extended to 50,000 times higher repetition rates. A laser operating at 500 kHz excites the seeded NO molecules, and a camera, synchronized with the laser and fitted with a beam-splitting assembly, acquires two separate images of the flow. The resulting stereoscopic images provide three-dimensional flow visualizations at 500 kHz for the first time. The 200 ns exposure time in each frame is fast enough to freeze the flow while the 500 kHz repetition rate is fast enough to time-resolve changes in the flow being studied. This method is applied to visualize the evolving hypersonic flow structures that propagate downstream of a discrete protuberance attached to a flat plate. The technique was demonstrated in the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel facility. Different tunnel Reynolds number conditions, NO flow rates and two different cylindrical protuberance heights were investigated. The location of the onset of flow unsteadiness, an indicator of transition, was observed to move downstream during the tunnel runs, coinciding with an increase in the model temperature.

  17. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    Science.gov (United States)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic

  18. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability in resource-rich regions of advanced technologies in scanning and 3-D imaging in current ophthalmology practice, world-wide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as CDR (cup to disc diameter ratio) and CAR (cup to disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research work demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences could result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This research work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
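
    For readers unfamiliar with the two screening parameters named above, the sketch below is a minimal illustration only, not the paper's registration-based pipeline; it assumes binary cup and disc masks are already available from some segmentation step and shows how CDR and CAR are conventionally computed from them.

```python
import numpy as np

def cdr_car(cup_mask, disc_mask):
    """Vertical cup-to-disc diameter ratio (CDR) and cup-to-disc area ratio (CAR)
    from binary masks of the optic cup and the optic disc."""
    def vertical_diameter(mask):
        rows = np.where(mask.any(axis=1))[0]
        return rows.max() - rows.min() + 1 if rows.size else 0
    cdr = vertical_diameter(cup_mask) / vertical_diameter(disc_mask)
    car = cup_mask.sum() / disc_mask.sum()
    return cdr, car
```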

  19. Assessment of stereoscopic optic disc images using an autostereoscopic screen – experimental study

    Directory of Open Access Journals (Sweden)

    Vaideanu Daniella

    2008-07-01

    Full Text Available Abstract Background Stereoscopic assessment of the optic disc morphology is an important part of the care of patients with glaucoma. The aim of this study was to assess stereoviewing of stereoscopic optic disc images using an example of the new technology of autostereoscopic screens compared to liquid crystal shutter goggles. Methods Independent assessment of glaucomatous disc characteristics and measurement of optic disc and cup parameters whilst using either an autostereoscopic screen or liquid crystal shutter goggles synchronized with a view switching display. The main outcome measures were inter-modality agreements between the two modalities used, as evaluated by the weighted kappa test and Bland Altman plots. Results Inter-modality agreement for measuring optic disc parameters was good [Average kappa coefficient for vertical Cup/Disc ratio was 0.78 (95% CI 0.62–0.91) and 0.81 (95% CI 0.6–0.92) for observers 1 and 2, respectively]. Agreement between modalities for assessing optic disc characteristics for glaucoma on a five-point scale was very good with a kappa value of 0.97. Conclusion This study compared two different methods of stereo viewing. The results of assessment of the different optic disc and cup parameters were comparable using an example of the newly developing autostereoscopic display technologies as compared to the shutter goggles system used. The inter-modality agreement was high. This new technology carries potential clinical usability benefits in different areas of ophthalmic practice.
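
    As a side note, the weighted kappa agreement used above can be reproduced for one's own grading data with standard tooling. The sketch below is illustrative only, with made-up grades standing in for the study's ratings, and uses scikit-learn's linear-weighted Cohen's kappa.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical 5-point glaucoma likelihood grades from one observer under the
# two viewing modalities (autostereoscopic screen vs. shutter goggles).
grades_autostereo = [1, 2, 2, 3, 5, 4, 1, 2, 3, 3]
grades_goggles    = [1, 2, 3, 3, 5, 4, 1, 2, 3, 2]

kappa = cohen_kappa_score(grades_autostereo, grades_goggles, weights="linear")
print(f"linear-weighted kappa: {kappa:.2f}")
```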

  20. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    NARCIS (Netherlands)

    Ragni, D.; Van Oudheusden, B.W.; Scarano, F.

    2011-01-01

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes

  1. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    Science.gov (United States)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-01

    Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient data with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
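
    The core of the markerless association step, as described, is matching feature points between the two projections by descriptor similarity. A minimal OpenCV sketch of that idea follows; it is not the authors' implementation (the paper detects points by intensity gradient, whereas here SIFT's own detector stands in for simplicity, and the ratio-test threshold is illustrative).

```python
import cv2

def match_tissue_features(img_a, img_b, ratio=0.75):
    """Detect SIFT keypoints on two projection images and associate them
    by descriptor distance with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```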

  2. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    International Nuclear Information System (INIS)

    Xie Yaoqin; Gu Jia; Xing Lei; Liu Wu

    2013-01-01

    Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient data with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow. (paper)

  3. Stereoscopic particle image velocimetry investigations of the mixed convection exchange flow through a horizontal vent

    Science.gov (United States)

    Varrall, Kevin; Pretrel, Hugues; Vaux, Samuel; Vauquelin, Olivier

    2017-10-01

    The exchange flow through a horizontal vent linking two compartments (one above the other) is studied experimentally. This exchange is here governed by both the buoyant natural effect due to the temperature difference of the fluids in both compartments, and the effect of a (forced) mechanical ventilation applied in the lower compartment. Such a configuration leads to uni- or bi-directional flows through the vent. In the experiments, buoyancy is induced in the lower compartment by means of an electrical resistor. The forced ventilation is applied in exhaust or supply mode, for three different values of the vent area. To estimate both velocity fields and flow rates at the vent, measurements are performed at thermal steady state, in a plane flush with the vent on the upper-compartment side, using stereoscopic particle image velocimetry (SPIV), which is novel for this kind of flow. The SPIV measurements allow the areas occupied by the upward and downward flows to be determined.
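
    Since the stated goal is to estimate exchange flow rates from the measured velocity field in the vent plane, a hedged sketch of that post-processing step is given below. It assumes a uniform SPIV grid and uses the out-of-plane (vertical) velocity component; it is an illustration, not the authors' processing code.

```python
import numpy as np

def exchange_flow_rates(w, dx, dy):
    """Upward and downward volumetric flow rates through the vent plane,
    given the out-of-plane velocity field w on a regular grid (spacing dx, dy)."""
    cell_area = dx * dy
    q_up = np.sum(w[w > 0]) * cell_area
    q_down = -np.sum(w[w < 0]) * cell_area
    return q_up, q_down
```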

  4. Objective quality assessment of stereoscopic images with vertical disparity using EEG

    Science.gov (United States)

    Shahbazi Avarvand, Forooz; Bosse, Sebastian; Müller, Klaus-Robert; Schäfer, Ralf; Nolte, Guido; Wiegand, Thomas; Curio, Gabriel; Samek, Wojciech

    2017-08-01

    Objective. Neurophysiological correlates of vertical disparity in 3D images are studied in an objective approach using EEG technique. These disparities are known to negatively affect the quality of experience and to cause visual discomfort in stereoscopic visualizations. Approach. We have presented four conditions to subjects: one in 2D and three conditions in 3D, one without vertical disparity and two with different vertical disparity levels. Event related potentials (ERPs) are measured for each condition and the differences between ERP components are studied. Analysis is also performed on the induced potentials in the time frequency domain. Main results. Results show that there is a significant increase in the amplitude of P1 components in 3D conditions in comparison to 2D. These results are consistent with previous studies which have shown that P1 amplitude increases due to the depth perception in 3D compared to 2D. However the amplitude is significantly smaller for maximum vertical disparity (3D-3) in comparison to 3D with no vertical disparity. Our results therefore suggest that the vertical disparity in 3D-3 condition decreases the perception of depth compared to other 3D conditions and the amplitude of P1 component can be used as a discriminative feature. Significance. The results show that the P1 component increases in amplitude due to the depth perception in the 3D stimuli compared to the 2D stimulus. On the other hand the vertical disparity in the stereoscopic images is studied here. We suggest that the amplitude of P1 component is modulated with this parameter and decreases due to the decrease in the perception of depth.

  5. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties.

    Science.gov (United States)

    Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2015-10-01

    Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
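
    To make the sparse-feature-similarity idea concrete, here is a heavily simplified sketch. It is not the authors' method: it learns a single-scale dictionary with scikit-learn on synthetic stand-in images and compares coefficient vectors with a generic SSIM-like index in place of the paper's phase/amplitude formulation and binocular combination.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                      # stand-in reference view
distorted = reference + 0.05 * rng.random((64, 64))   # stand-in distorted view

def patch_matrix(img):
    p = extract_patches_2d(img, (8, 8), max_patches=300, random_state=0)
    p = p.reshape(len(p), -1)
    return p - p.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
dico.fit(patch_matrix(reference))

c_ref = dico.transform(patch_matrix(reference))
c_dis = dico.transform(patch_matrix(distorted))

# Simplified similarity index between corresponding sparse coefficient vectors.
num = 2 * np.abs(c_ref * c_dis).sum(axis=1) + 1e-6
den = (c_ref ** 2 + c_dis ** 2).sum(axis=1) + 1e-6
print("mean sparse-feature similarity:", (num / den).mean())
```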

  6. Time Dependence of Intrafraction Patient Motion Assessed by Repeat Stereoscopic Imaging

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Nuyttens, Joost J.; Levendag, Peter C.; Heijmen, Ben J.M.

    2008-01-01

    Purpose: To quantify intrafraction patient motion and its time dependence in immobilized intracranial and extracranial patients. The data can be used to optimize the intrafraction imaging frequency and consequent patient setup correction with an image guidance and tracking system, and to establish the required safety margins in the absence of such a system. Method and Materials: The intrafraction motion of 32 intracranial patients, immobilized with a thermoplastic mask, and 11 supine- and 14 prone-treated extracranial spine patients, immobilized with a vacuum bag, was analyzed. The motion was recorded by an X-ray, stereoscopic, image-guidance system. For each group, we calculated separately the systematic (overall mean and SD) and the random displacement as a function of elapsed intrafraction time. Results: The SD of the systematic intrafraction displacements increased linearly over time for all three patient groups. For intracranial-, supine-, and prone-treated patients, the SD increased to 0.8, 1.2, and 2.2 mm, respectively, in a period of 15 min. The random displacements for the prone-treated patients were significantly higher than for the other groups, namely 1.6 mm (1 SD), probably caused by respiratory motion. Conclusions: Despite the applied immobilization devices, patients drift away from their initial position during a treatment fraction. These drifts are in general small if compared with conventional treatment margins, but will significantly contribute to the margin for high-precision radiation treatments with treatment times of 15 min or longer.
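
    The separation into systematic and random components mentioned above is consistent with the usual population-statistics convention: the mean and SD of the per-patient mean displacements give the systematic part, and the root-mean-square of the per-patient SDs gives the random part. A small hedged sketch with made-up displacement data (the paper's exact estimators are not spelled out in the abstract):

```python
import numpy as np

# Hypothetical intrafraction displacements (mm) along one axis:
# rows = patients, columns = repeat stereoscopic images within a fraction.
d = np.array([[0.1, 0.3, 0.5],
              [-0.2, 0.0, 0.4],
              [0.6, 0.9, 1.1]])

patient_means = d.mean(axis=1)
overall_mean = patient_means.mean()            # group mean (overall systematic error)
systematic_sd = patient_means.std(ddof=1)      # SD of systematic errors
random_sd = np.sqrt((d.std(axis=1, ddof=1) ** 2).mean())  # RMS of per-patient SDs
print(overall_mean, systematic_sd, random_sd)
```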

  7. The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger

    Science.gov (United States)

    Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.

    2009-05-01

    Future large arrays of Imaging Atmospheric Cherenkov telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array (FPGA) technology allows lookup tables to be used at the array trigger level to form a real-time pattern recognition trigger that capitalizes on the multiple viewpoints of the shower at different shower core distances. A proof of principle system is currently under construction. It is based on 400 MHz FPGAs and the goal is for camera trigger rates of up to 10 MHz and a tunable cosmic-ray background suppression at the array level.

  8. 3-D Digitization of Stereoscopic Jet-in-Crossflow Vortex Structure Images via Augmented Reality

    Science.gov (United States)

    Sigurdson, Lorenz; Strand, Christopher; Watson, Graeme; Nault, Joshua; Tucker, Ryan

    2006-11-01

    Stereoscopic images of smoke-laden vortex flows have proven useful for understanding the topology of the embedded 3-D vortex structures. Images from two cameras allow a perception of the 3-D structure via the use of red/blue eye glasses. The human brain has an astonishing capacity to calculate and present to the observer the complex turbulent smoke volume. We have developed a technique whereby a virtual cursor is introduced to the perception, which creates an "augmented reality". The perceived position of this cursor in the 3-D field can be precisely controlled by the observer. It can be brought near a characteristic vortex structure in order to digitally estimate the spatial coordinates of that feature. A calibration procedure accounts for camera positioning. Vortex tubes can be traced and recorded for later or real time superposition of tube skeleton models. These models can be readily digitally obtained for display in graphics systems to allow complete exploration from any location or perspective. A unique feature of this technology is the use of the human brain to naturally perform the difficult computation of the shape of the translucent smoke volume. Examples are given of application to low velocity ratio and Reynolds number elevated jets-in-crossflow.

  9. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on the luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.

  10. Estimated Prevalence of Monocular Blindness and Monocular ...

    African Journals Online (AJOL)

    with MB/MSVI; among the 109 (51%) children with MB/MSVI that had a known etiology, trauma. Table 1 (major anatomical site of monocular blindness and monocular severe visual impairment in children; anatomical cause, total (%)): corneal scar, 89 (42); whole globe, 43 (20); lens, 42 (19); amblyopia, 16 (8); retina, 9 (4).

  11. Partially converted stereoscopic images and the effects on visual attention and memory

    Science.gov (United States)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study comprised two experiments examining cognitive activities, such as visual attention and memory, while viewing stereoscopic (3D) images. For this study, partially converted 3D images were used with binocular parallax added to a specific region of the image. In Experiment 1, a change blindness task was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was inserted between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in viewers' memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct

  12. Turbulent Structure of a Simplified Urban Fluid Flow Studied Through Stereoscopic Particle Image Velocimetry

    Science.gov (United States)

    Monnier, Bruno; Goudarzi, Sepehr A.; Vinuesa, Ricardo; Wark, Candace

    2018-02-01

    Stereoscopic particle image velocimetry was used to provide a three-dimensional characterization of the flow around a simplified urban model defined by a 5 by 7 array of blocks, forming four parallel streets, perpendicular to the incoming wind direction corresponding to a zero angle of incidence. Channeling of the flow through the array under consideration was observed, and its effect increased as the incoming wind direction, or angle of incidence (AOI), was changed from 0° to 15°, 30°, and 45°. The flow between blocks can be divided into two regions: a region of low turbulence kinetic energy (TKE) levels close to the leeward side of the upstream block, and a high TKE area close to the downstream block. The centre of the arch vortex is located in the low TKE area, and two regions of large streamwise velocity fluctuation bound the vortex in the spanwise direction. Moreover, a region of large spanwise velocity fluctuation on the downstream block is found between the vortex legs. Our results indicate that the reorientation of the arch vortex at increasing AOI is produced by the displacement of the different TKE regions and their interaction with the shear layers on the sides and top of the upstream and downstream blocks, respectively. There is also a close connection between the turbulent structure between the blocks and the wind gusts. The correlations among gust components were also studied, and it was found that in the near-wall region of the street the correlations between the streamwise and spanwise gusts R_{uv} were dominant for all four AOI cases. At higher wall-normal positions in the array, the R_{uw} correlation decreased with increasing AOI, whereas the R_{uv} coefficient increased as AOI increased, and at AOI = 45° all three correlations exhibited relatively high values of around 0.4.

  13. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  14. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  15. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
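
    The horizontal image translation step described above has a simple core: read the disparity at the fixated point and shift the two views so that this disparity becomes zero, placing the object of interest on the screen plane. The sketch below is a hedged illustration of that step, not the authors' implementation; it assumes a signed per-pixel disparity map in pixels and ignores edge filling.

```python
import numpy as np

def recenter_on_gaze(left, right, disparity, gaze_xy):
    """Shift the left/right views horizontally so the disparity at the gaze
    point becomes zero (fixated object moved onto the screen plane)."""
    gx, gy = gaze_xy
    d = disparity[gy, gx]                          # signed disparity (pixels) at fixation
    shift = int(round(d / 2))
    left_shifted = np.roll(left, -shift, axis=1)   # crude wrap-around shift
    right_shifted = np.roll(right, shift, axis=1)
    return left_shifted, right_shifted
```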

  16. Stereoscopic three-dimensional images of an anatomical dissection of the eyeball and orbit for educational purposes.

    Science.gov (United States)

    Matsuo, Toshihiko; Takeda, Yoshimasa; Ohtsuka, Aiji

    2013-01-01

    The purpose of this study was to develop a series of stereoscopic anatomical images of the eye and orbit for use in the curricula of medical schools and residency programs in ophthalmology and other specialties. Layer-by-layer dissection of the eyelid, eyeball, and orbit of a cadaver was performed by an ophthalmologist. A stereoscopic camera system was used to capture a series of anatomical views that were scanned in a panoramic three-dimensional manner around the center of the lid fissure. The images could be rotated 360 degrees in the frontal plane and the angle of view could be tilted up to 90 degrees along the anteroposterior axis perpendicular to the frontal plane around the 360 degrees. The skin, orbicularis oculi muscle, and upper and lower tarsus were sequentially observed. The upper and lower eyelids were removed to expose the bulbar conjunctiva and to insert three 25-gauge trocars for vitrectomy at the location of the pars plana. The cornea was cut at the limbus, and the lens with mature cataract was dislocated. The sclera was cut to observe the trocars from inside the eyeball. The sclera was further cut to visualize the superior oblique muscle with the trochlea and the inferior oblique muscle. The eyeball was dissected completely to observe the optic nerve and the ophthalmic artery. The thin bones of the medial and inferior orbital wall were cracked with a forceps to expose the ethmoid and maxillary sinus, respectively. In conclusion, the serial dissection images visualized aspects of the local anatomy specific to various procedures, including the levator muscle and tarsus for blepharoptosis surgery, 25-gauge trocars as viewed from inside the eye globe for vitrectomy, the oblique muscles for strabismus surgery, and the thin medial and inferior orbital bony walls for orbital bone fractures.

  17. Preliminary evaluation of a prototype stereoscopic a-Si:H-based X-ray imaging system for full-field digital mammography

    International Nuclear Information System (INIS)

    Darambara, D.G.; Speller, R.D.; Horrocks, J.A.; Godber, S.; Wilson, R.; Hanby, A.

    2001-01-01

    In a pre-clinical study, we have been investigating the potential of a-Si:H active matrix, flat panel imagers for X-ray full-field digital mammography through the development of an advanced 3D X-ray imaging system and have measured a number of their important imaging characteristics. To enhance the information embodied into the digital images produced by the a-Si array, stereoscopic images, created by viewing the object under examination from two angles and recombining the images, were obtained. This method provided us with a full 3D X-ray image of the test object as well as left and right perspective 2D images all at the same time. Within this scope, images of fresh, small human breast tissue specimens--normal and diseased--were obtained at ±2 deg., processed and stereoscopically displayed for a pre-clinical evaluation by radiologists. It was demonstrated that the stereoscopic presentation of the images provides important additional information and has potential benefits over the more traditional 2D data

  18. GEOMETRIC AND REFLECTANCE SIGNATURE CHARACTERIZATION OF COMPLEX CANOPIES USING HYPERSPECTRAL STEREOSCOPIC IMAGES FROM UAV AND TERRESTRIAL PLATFORMS

    Directory of Open Access Journals (Sweden)

    E. Honkavaara

    2016-06-01

    Full Text Available Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physical based retrieval of biophysical and biochemical parameters by model inversion technologies. Objective of this investigation was to study the aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV and terrestrial platform with novel hyperspectral frame cameras in complex, forested environment.

  19. Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging.

    Science.gov (United States)

    Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves

    2014-01-01

    Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general.

  20. Crosstalk evaluation in stereoscopic displays

    NARCIS (Netherlands)

    Wang, L.; Teunissen, C.; Tu, Yan; Chen, Li; Zhang, P.; Zhang, T.; Heynderickx, I.E.J.

    2011-01-01

    Substantial progress in liquid-crystal display and polarization film technology has enabled several types of stereoscopic displays. Despite all progress, some image distortions still exist in these 3-D displays, of which interocular crosstalk - light leakage of the image for one eye to the other eye

  1. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    Science.gov (United States)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Optimizations in display driver, panel timing firmware, backlight hardware, eyewear optical stack, and synch mechanism combined can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action:- 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100uS latency control (via BT Sig) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for Monitors and TVs to monetize via PC attach.

  2. Capturing the added value of three-dimensional television : viewing experience and naturalness of stereoscopic images

    NARCIS (Netherlands)

    Seuntiëns, P.J.H.; Heynderickx, I.E.J.; IJsselsteijn, W.A.

    2008-01-01

    The term "image quality" is often used to describe the performance of an imaging system. Recent research showed however that image quality may not be the most appropriate term to capture the evaluative processes associated with experiencing three-dimensional (3D) images. The added value of depth in

  3. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Ragni, D.; Oudheusden, B.W. van; Scarano, F. [Delft University of Technology, Faculty of Aerospace Engineering, Delft (Netherlands)

    2012-02-15

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes perpendicular to the blade axis and merged to form a 3D measurement volume. Transonic conditions have been reached at the tip region, with a revolution frequency of 19,800 rpm and a relative free-stream Mach number of 0.73 at the tip. The pressure field and the surface pressure distribution are inferred from the 3D velocity data through integration of the momentum Navier-Stokes equation in differential form, allowing for the simultaneous flow visualization and the aerodynamic loads computation, with respect to a reference frame moving with the blade. The momentum and pressure data are further integrated by means of a contour-approach to yield the aerodynamic sectional force components as well as the blade torsional moment. A steady Reynolds averaged Navier-Stokes numerical simulation of the entire propeller model has been used for comparison to the measurement data. (orig.)
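
    For reference, the pressure-from-PIV step mentioned here rests on evaluating the pressure gradient from the measured velocity field through the momentum (Navier-Stokes) equation for incompressible flow and then integrating it spatially from a location of known reference pressure. A standard form of that relation (not quoted from the paper, and omitting the additional apparent forces of a rotating reference frame) is:

```latex
\nabla p \;=\; -\rho\left(\frac{\partial \mathbf{u}}{\partial t}
              + (\mathbf{u}\cdot\nabla)\,\mathbf{u}\right)
              \;+\; \mu\,\nabla^{2}\mathbf{u}
```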

  4. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    Science.gov (United States)

    Ragni, D.; van Oudheusden, B. W.; Scarano, F.

    2012-02-01

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes perpendicular to the blade axis and merged to form a 3D measurement volume. Transonic conditions have been reached at the tip region, with a revolution frequency of 19,800 rpm and a relative free-stream Mach number of 0.73 at the tip. The pressure field and the surface pressure distribution are inferred from the 3D velocity data through integration of the momentum Navier-Stokes equation in differential form, allowing for the simultaneous flow visualization and the aerodynamic loads computation, with respect to a reference frame moving with the blade. The momentum and pressure data are further integrated by means of a contour-approach to yield the aerodynamic sectional force components as well as the blade torsional moment. A steady Reynolds averaged Navier-Stokes numerical simulation of the entire propeller model has been used for comparison to the measurement data.

  5. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    Science.gov (United States)

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon. It has been shown that the usefulness of this technique is a step toward computer-aided surgery that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Interlopers 3D: experiences designing a stereoscopic game

    Science.gov (United States)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display. To implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible or at least very difficult to play in non-stereoscopic mode. Method A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results The results show that in both the basic and advanced game participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that by disrupting the depth from motion cue the game became more difficult in monoscopic 2D. Results also show a certain amount of learning taking place, meaning that players were able to score higher and finish the game faster over the course of the experiment. Conclusions Although the game was not impossible to play in monoscopic 2D, participants' results show that it put them at a significant disadvantage when compared to playing in stereoscopic 3D.

  7. Costless Platform for High Resolution Stereoscopic Images of a High Gothic Facade

    Science.gov (United States)

    Héno, R.; Chandelier, L.; Schelstraete, D.

    2012-07-01

    In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN's School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to come and survey the main facade of the cathedral of Amiens, which is very complex as far as size and decoration are concerned. Although it was first planned to use a lift truck for the image survey, budget considerations and taste for experimentation led the project to other perspectives: images shot from the ground level with a long focal camera will be combined with complementary images shot from whatever higher galleries are available on the main facade with a wide angle camera fixed on a horizontal 2.5 meter long pole. This heteroclite image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated on specific parts of the facade with both sources of images. If the proposed device and methodology to get full image coverage of the main facade happen to be fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds with wide angle images on the rose of the main facade.

  8. COSTLESS PLATFORM FOR HIGH RESOLUTION STEREOSCOPIC IMAGES OF A HIGH GOTHIC FACADE

    Directory of Open Access Journals (Sweden)

    R. Héno

    2012-07-01

    Full Text Available In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN’s School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to come and survey the main facade of the cathedral of Amiens, which is very complex as far as size and decoration are concerned. Although it was first planned to use a lift truck for the image survey, budget considerations and taste for experimentation led the project to other perspectives: images shot from the ground level with a long focal camera will be combined with complementary images shot from whatever higher galleries are available on the main facade with a wide angle camera fixed on a horizontal 2.5 meter long pole. This heteroclite image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated on specific parts of the facade with both sources of images. If the proposed device and methodology to get full image coverage of the main facade happen to be fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds with wide angle images on the rose of the main facade.

  9. A METHOD FOR RECORDING AND VIEWING STEREOSCOPIC IMAGES IN COLOUR USING MULTICHROME FILTERS

    DEFF Research Database (Denmark)

    2000-01-01

    The aim of the invention is to create techniques for the encoding, production and viewing of stereograms, supplemented by methods for selecting certain optical filters needed in these novel techniques, thus providing a human observer with stereograms each of which consist of a single image ... in a conventional stereogram recorded of the scene. The invention makes use of a colour-based encoding technique and viewing filters selected so that the human observer receives, in one eye, an image of nearly full colour information, in the other eye, an essentially monochrome image supplying the parallactic ...

  10. A simple device for the stereoscopic display of 3D CT images

    International Nuclear Information System (INIS)

    Haveri, M.; Suramo, I.; Laehde, S.; Karhula, V.; Junila, J.

    1997-01-01

    We describe a simple device for creating true 3D views of image pairs obtained at 3D CT reconstruction. The device presents the images at slightly different angles of view to the left and the right eyes. This true 3D viewing technique was applied experimentally in the evaluation of complex acetabular fractures. Experiments were also made to determine the optimal angle between the images for each eye. The angle varied between 1° and 7° for different observers and also depended on the display field of view used. (orig.)

  11. The Advanced Gamma-ray Imaging System (AGIS): A Nanosecond Time Scale Stereoscopic Array Trigger System.

    Science.gov (United States)

    Krennrich, Frank; Buckley, J.; Byrum, K.; Dawson, J.; Drake, G.; Horan, D.; Krawzcynski, H.; Schroedter, M.

    2008-04-01

    Imaging atmospheric Cherenkov telescope arrays (VERITAS, HESS) have shown unprecedented background suppression capabilities for reducing cosmic-ray induced air showers, muons and night sky background fluctuations. Next-generation arrays with on the order of 100 telescopes offer larger collection areas, provide the possibility to see the air shower from more view points on the ground, have the potential to improve the sensitivity and give additional background suppression. Here we discuss the design of a fast array trigger system that has the potential to perform a real time image analysis allowing substantially improved background rate suppression at the trigger level.

  12. Visual Suppression of Monocularly Presented Symbology Against a Fused Background in a Simulation and Training Environment

    National Research Council Canada - National Science Library

    Winterbottom, Marc D; Patterson, Robert; Pierce, Byron J; Taylor, Amanda

    2006-01-01

    .... This may create interocular differences in image characteristics that could disrupt binocular vision by provoking visual suppression, thus reducing visibility of the background scene, monocular symbology...

  13. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)

    What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ...

  14. Accuracy of cranial coplanar beam therapy using an oblique, stereoscopic x-ray image guidance system

    International Nuclear Information System (INIS)

    Vinci, Justin P.; Hogstrom, Kenneth R.; Neck, Daniel W.

    2008-01-01

    A system for measuring two-dimensional (2D) dose distributions in orthogonal anatomical planes in the cranium was developed and used to evaluate the accuracy of coplanar conformal therapy using ExacTrac image guidance. Dose distributions were measured in the axial, sagittal, and coronal planes using a CIRS (Computerized Imaging Reference Systems, Inc.) anthropomorphic head phantom with a custom internal film cassette. Sections of radiographic Kodak EDR2 film were cut, processed, and digitized using custom templates. Spatial and dosimetric accuracy and precision of the film system were assessed. BrainScan planned a coplanar-beam treatment to conformally irradiate a 2-cm-diameter × 2-cm-long cylindrical planning target volume. Prior to delivery, phantom misalignments were imposed in combinations of ±8 mm offsets in each of the principal directions. ExacTrac x-ray correction was applied until the phantom was within an acceptance criterion of 1 mm/1 deg. (first two measurement sets) or 0.4 mm/0.4 deg. (last two measurement sets). Measured dose distributions from film were registered to the treatment plan dose calculations and compared. Alignment errors, displacement between midpoints of planned and measured 70% isodose contours (Δc), and positional errors of the 80% isodose line were evaluated using 49 2D film measurements (98 profiles). Comparison of common, but independent measurements of Δc showed that systematic errors in the measurement technique were 0.2 mm or less along all three anatomical axes and that random error averaged (σ ± σ_σ) 0.29±0.06 mm for the acceptance criterion of 1 mm/1 deg. and 0.15±0.02 mm for the acceptance criterion of 0.4 mm/0.4 deg. The latter was consistent with independent estimates that showed the precision of the measurement system was 0.3 mm (2σ). Values of Δc were as great as 0.9, 0.3, and 1.0 mm along the P-A, R-L, and I-S axes, respectively. Variations in Δc along the P-A axis were correlated to misalignments between laser

  15. Stereoscopic optical viewing system

    Science.gov (United States)

    Tallman, C.S.

    1986-05-02

    An improved optical system which provides the operator with a stereoscopic viewing field and depth of vision, particularly suitable for use in various machines such as electron or laser beam welding and drilling machines. The system features two separate but independently controlled optical viewing assemblies from the eyepiece to a spot directly above the working surface. Each optical assembly comprises a combination of eye pieces, turning prisms, telephoto lenses for providing magnification, achromatic imaging relay lenses and final stage pentagonal turning prisms. Adjustment for variations in distance from the turning prisms to the workpiece, necessitated by varying part sizes and configurations and by the operator's visual acuity, is provided separately for each optical assembly by means of separate manual controls at the operator console or within easy reach of the operator.

  16. Stereoscopic methods in TEM

    International Nuclear Information System (INIS)

    Thomas, L.E.

    1975-07-01

    Stereoscopic methods used in TEM are reviewed. The use of stereoscopy to characterize three-dimensional structures observed by TEM has become widespread since the introduction of instruments operating at 1 MV. In its emphasis on whole structures and thick specimens this approach differs significantly from conventional methods of microstructural analysis based on three-dimensional image reconstruction from a number of thin-section views. The great advantage of stereo derives from the ability to directly perceive and measure structures in three dimensions by capitalizing on the unsurpassed human ability for stereoscopic matching of corresponding details on picture pairs showing the same features from different viewpoints. At this time, stereo methods are aimed mainly at structural understanding at the level of dislocations, precipitates, and irradiation-induced point-defect clusters in crystals, and on the cellular level of biological specimens. 3-d reconstruction methods have concentrated on the molecular level where image resolution requirements dictate the use of very thin specimens. One recent application of three-dimensional coordinate measurements is a system developed for analyzing depth variations in the numbers, sizes and total volumes of voids produced near the surfaces of metal specimens during energetic ion bombardment. This system was used to correlate the void volumes at each depth along the ion range with the number of atomic displacements produced at that depth, thereby unfolding the entire swelling versus dose relationship from a single stereo view. A later version of this system incorporating computer-controlled stereo display capabilities is now being built.

  17. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.
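
    The key geometric step in this abstract, recovering 3D coordinates for camera features by combining a 2D laser scan with the vertical-flat-wall assumption, can be sketched as intersecting each camera viewing ray with the vertical plane fitted to the scan. The function below is a hypothetical simplification, not the authors' implementation; it assumes the camera and laser share one coordinate frame and that the wall plane (point plus normal) has already been fitted to the scan.

```python
import numpy as np

def feature_point_3d(pixel, K, wall_point, wall_normal):
    """Back-project an image feature onto the vertical wall plane.
    `wall_point`/`wall_normal` define the plane fitted to the 2D laser scan
    and extruded vertically; K is the camera intrinsic matrix."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray direction
    t = (wall_normal @ wall_point) / (wall_normal @ ray)
    return t * ray                                     # 3D point in the camera frame
```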

  18. Stereoscopic augmented reality for laparoscopic surgery.

    Science.gov (United States)

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and

  19. Effects of Intraluminal Thrombus on Patient-Specific Abdominal Aortic Aneurysm Hemodynamics via Stereoscopic Particle Image Velocimetry and Computational Fluid Dynamics Modeling

    Science.gov (United States)

    Chen, Chia-Yuan; Antón, Raúl; Hung, Ming-yang; Menon, Prahlad; Finol, Ender A.; Pekkan, Kerem

    2014-01-01

    The pathology of the human abdominal aortic aneurysm (AAA) and its relationship to the later complication of intraluminal thrombus (ILT) formation remains unclear. The hemodynamics in the diseased abdominal aorta are hypothesized to be a key contributor to the formation and growth of ILT. The objective of this investigation is to establish a reliable 3D flow visualization method with corresponding validation tests with high confidence in order to provide insight into the basic hemodynamic features for a better understanding of hemodynamics in AAA pathology and seek potential treatment for AAA diseases. A stereoscopic particle image velocimetry (PIV) experiment was conducted using transparent patient-specific experimental AAA models (with and without ILT) at three axial planes. Results show that before ILT formation, a 3D vortex was generated in the AAA phantom. This geometry-related vortex was not observed after the formation of ILT, indicating its possible role in the subsequent appearance of ILT in this patient. It may indicate that a longer residence time of recirculated blood flow in the aortic lumen due to this vortex caused sufficient shear-induced platelet activation to develop ILT and maintain uniform flow conditions. Additionally, two computational fluid dynamics (CFD) modeling codes (Fluent and an in-house cardiovascular CFD code) were compared with the two-dimensional, three-component velocity stereoscopic PIV data. Results showed that correlation coefficients of the out-of-plane velocity data between PIV and both CFD methods are greater than 0.85, demonstrating good quantitative agreement. The stereoscopic PIV study can be utilized as test case templates for ongoing efforts in cardiovascular CFD solver development. Likewise, it is envisaged that the patient-specific data may provide a benchmark for further studying hemodynamics of actual AAA, ILT, and their convolution effects under physiological conditions for clinical applications. PMID:24316984

  20. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

    Full Text Available This paper presents a novel indoor navigation and ranging strategy via a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment whose layout is previously unknown and GPS-denied, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained microaerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is only limited by the capabilities of the camera and environmental entropy.

  1. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).
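
    The decomposition of image motion vectors into tangent-space basis fields described above is, in practice, a linear least-squares fit. The following is a generic sketch of that step only; the array shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def decompose_motion(flow, basis_fields):
    """Least-squares decomposition of an observed image-motion field into
    a linear combination of basis motion fields (the tangent-space basis
    of the pose manifold in the formulation above).

    flow         : (N, 2) observed motion vectors at N image points
    basis_fields : (K, N, 2) basis motion vector fields
    returns      : (K,) coefficients, i.e. the local pose-change estimate
    """
    A = basis_fields.reshape(len(basis_fields), -1).T    # (2N, K) design matrix
    b = flow.reshape(-1)                                 # (2N,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Toy check: a flow built from known coefficients is recovered exactly.
rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 100, 2))
true_c = np.array([0.5, -1.0, 0.25])
flow = np.tensordot(true_c, basis, axes=1)
print(decompose_motion(flow, basis))   # ~[0.5, -1.0, 0.25]
```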

  2. Quantification and recognition of parkinsonian gait from monocular video imaging using kernel-based principal component analysis

    Directory of Open Access Journals (Sweden)

    Chen Shih-Wei

    2011-11-01

    Full Text Available Abstract Background The computer-aided identification of specific gait patterns is an important issue in the assessment of Parkinson's disease (PD). In this study, a computer vision-based gait analysis approach is developed to assist the clinical assessments of PD with kernel-based principal component analysis (KPCA). Method Twelve PD patients and twelve healthy adults with no neurological history or motor disorders within the past six months were recruited and separated according to their "Non-PD", "Drug-On", and "Drug-Off" states. The participants were asked to wear light-colored clothing and perform three walking trials through a corridor decorated with a navy curtain at their natural pace. The participants' gait performance during the steady-state walking period was captured by a digital camera for gait analysis. The collected walking image frames were then transformed into binary silhouettes for noise reduction and compression. Using the developed KPCA-based method, the features within the binary silhouettes can be extracted to quantitatively determine the gait cycle time, stride length, walking velocity, and cadence. Results and Discussion The KPCA-based method uses a feature-extraction approach, which was verified to be more effective than traditional image area and principal component analysis (PCA) approaches in classifying "Non-PD" controls and "Drug-Off/On" PD patients. Encouragingly, this method has a high accuracy rate, 80.51%, for recognizing different gaits. Quantitative gait parameters are obtained, and the power spectra of the patients' gaits are analyzed. We show that the slow and irregular actions of PD patients during walking tend to transfer some of the power from the main lobe frequency to a lower frequency band. Our results indicate the feasibility of using gait performance to evaluate the motor function of patients with PD. Conclusion This KPCA-based method requires only a digital camera and a decorated corridor setup
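
    As a rough sketch of the silhouette-based feature extraction (not the authors' implementation; the kernel width, component count and synthetic data are placeholders), the frames can be flattened and projected with an RBF-kernel PCA, after which the trajectory of the leading components over time can be searched for gait-cycle events.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_gait_features(silhouettes, n_components=3, gamma=1e-4):
    """Flatten binary silhouette frames and project them with RBF-kernel PCA.

    silhouettes : (n_frames, height, width) binary array
    returns     : (n_frames, n_components) trajectory of kernel principal
                  components, from which cycle time, stride length, velocity
                  and cadence could subsequently be derived.
    """
    X = silhouettes.reshape(len(silhouettes), -1).astype(float)
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    return kpca.fit_transform(X)

# Hypothetical usage with synthetic data standing in for extracted silhouettes.
frames = (np.random.default_rng(1).random((120, 64, 48)) > 0.5)
traj = kpca_gait_features(frames)
print(traj.shape)   # (120, 3)
```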

  3. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Science.gov (United States)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
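
    The "Mapping Algorithm" condition above relies on mapping scene depth into a bounded range of screen disparity. The snippet below is a deliberately simplified, linear illustration of that idea (not the authors' algorithm); a fixed mapping keeps the depth bounds constant for the whole sequence, whereas the dynamic variant described in the record would re-estimate them as the shot's depth content changes.

```python
def map_depth_to_disparity(z, z_near, z_far, d_min_px, d_max_px):
    """Linearly map a scene depth z in [z_near, z_far] to a screen disparity
    in [d_min_px, d_max_px], clamping depths outside the range."""
    t = (z - z_near) / (z_far - z_near)
    t = min(max(t, 0.0), 1.0)
    return d_min_px + t * (d_max_px - d_min_px)

# Keep all disparities within a comfortable -20..+20 pixel budget.
for z in (1.0, 5.0, 10.0):
    print(z, map_depth_to_disparity(z, z_near=1.0, z_far=10.0,
                                    d_min_px=-20.0, d_max_px=20.0))
```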

  4. Alternation Frequency Thresholds for Stereopsis as a Technique for Exploring Stereoscopic Difficulties

    Directory of Open Access Journals (Sweden)

    Svetlana Rychkova

    2011-01-01

    Full Text Available When stereoscopic images are presented alternately to the two eyes, stereopsis occurs at F ⩾ 1 Hz full-cycle frequencies for very simple stimuli, and F ⩾ 3 Hz full-cycle frequencies for random-dot stereograms (eg Ludwig I, Pieper W, Lachnit H, 2007 “Temporal integration of monocular images separated in time: stereopsis, stereoacuity, and binocular luster” Perception & Psychophysics 69 92–102). Using twenty different stereograms presented through liquid crystal shutters, we studied the transition to stereopsis with fifteen subjects. The onset of stereopsis was observed during a stepwise increase of the alternation frequency, and its disappearance was observed during a stepwise decrease in frequency. The lowest F values (around 2.5 Hz) were observed with stimuli involving two to four simple disjoint elements (circles, arcs, rectangles). Higher F values were needed for stimuli containing slanted elements or curved surfaces (about 1 Hz increment), overlapping elements at two different depths (about 2.5 Hz increment), or camouflaged overlapping surfaces (> 7 Hz increment). A textured cylindrical surface with a horizontal axis appeared easier to interpret (5.7 Hz) than a pair of slanted segments separated in depth but forming a cross in projection (8 Hz). Training effects were minimal, and F usually increased as disparities were reduced. The hierarchy of difficulties revealed in the study may shed light on various problems that the brain needs to solve during stereoscopic interpretation. During the construction of the three-dimensional percept, the loss of information due to natural decay of the stimuli traces must be compensated by refreshes of visual input. In the discussion an attempt is made to link our results with recent advances in the comprehension of visual scene memory.

  5. Quality assurance of a system for improved target localization and patient set-up that combines real-time infrared tracking and stereoscopic X-ray imaging.

    Science.gov (United States)

    Verellen, Dirk; Soete, Guy; Linthout, Nadine; Van Acker, Swana; De Roover, Patsy; Vinh-Hung, Vincent; Van de Steene, Jan; Storme, Guy

    2003-04-01

    The aim of this study is to investigate the positional accuracy of a prototype X-ray imaging tool in combination with a real-time infrared tracking device allowing automated patient set-up in three dimensions. A prototype X-ray imaging tool has been integrated with a commercially released real-time infrared tracking device. The system, consisting of two X-ray tubes mounted to the ceiling and a centrally located amorphous silicon detector has been developed for automated patient positioning from outside the treatment room prior to treatment. Two major functions are supported: (a) automated fusion of the actual treatment images with digitally reconstructed radiographs (DRRs) representing the desired position; (b) matching of implanted radio opaque markers. Measurements of known translational (up to 30.0mm) and rotational (up to 4.0 degrees ) set-up errors in three dimensions as well as hidden target tests have been performed on anthropomorphic phantoms. The system's accuracy can be represented with the mean three-dimensional displacement vector, which yielded 0.6mm (with an overall SD of 0.9mm) for the fusion of DRRs and X-ray images. Average deviations between known translational errors and calculations varied from -0.3 to 0.6mm with a standard deviation in the range of 0.6-1.2mm. The marker matching algorithm yielded a three-dimensional uncertainty of 0.3mm (overall SD: 0.4mm), with averages ranging from 0.0 to 0.3mm and a standard deviation in the range between 0.3 and 0.4mm. The stereoscopic X-ray imaging device integrated with the real-time infrared tracking device represents a positioning tool allowing for the geometrical accuracy that is required for conformal radiation therapy of abdominal and pelvic lesions, within an acceptable time-frame.

  6. Retinopathy screening in patients with type 1 diabetes diagnosed in young age using a non-mydriatic digital stereoscopic retinal imaging.

    Science.gov (United States)

    Minuto, N; Emmanuele, V; Vannati, M; Russo, C; Rebora, C; Panarello, S; Pistorio, A; Lorini, R; d'Annunzio, G

    2012-04-01

    Diabetic retinopathy seriously impairs patients' quality of life, since it represents the first cause of blindness in industrialized countries. To estimate prevalence of retinopathy in young Type 1 diabetes patients using a non-mydriatic digital stereoscopic retinal imaging (NMDSRI), and to evaluate the impact of socio-demographic, clinical, and metabolic variables. In 247 young patients glycated hemoglobin (HbA1c), gender, age, pubertal stage, presence of diabetic ketoacidosis (DKA), HLA-DQ heterodimers of susceptibility for Type 1 diabetes, and β-cell autoimmunity at clinical onset were considered. At retinopathy screening, we evaluated age, disease duration, pubertal stage, body mass index (BMI-SDS), insulin requirement, HbA1c levels, other autoimmune diseases, diabetes-related complications, serum concentrations of cholesterol and triglycerides, systolic and diastolic blood pressure. Retinopathy was found in 26/247 patients: 25 showed background retinopathy, and 1 had a sight-threatening retinopathy. A significant relationship between retinopathy and female gender (p=0.01), duration of disease ≥15 yr (p65 mg/dl (p=0.012) and mean HbA1c ≥7.5% or >9% (p=0.0014) were found at the multivariate logistic analysis. Metabolic control is the most important modifiable factor and promotion of continuous educational process to reach a good metabolic control is a cornerstone to prevent microangiopathic complications. Symptoms appear when the complication is already established; a screening program with an early diagnosis is mandatory to prevent an irreversible damage.

  7. Clinical usefulness of stereoscopic DSA

    International Nuclear Information System (INIS)

    Bussaka, Hiromasa; Takahashi, Mutsumasa; Miyawaki, Masayuki; Korogi, Yukinori; Yamashita, Yasuyuki; Izunaga, Hiroshi; Nakashima, Koki; Yoshizumi, Kazuhiro

    1988-01-01

    Digital subtraction angiography (DSA) is widely used as a screening examination for vascular diseases, but it has several disadvantages, one of which is overlapping of the vessels. To overcome this disadvantage, a stereoscopic technique was applied to our DSA equipment. Stereoscopic DSA is obtained by alternate exposures from the twin focal spots of an x-ray tube without additional contrast medium or radiation exposure. Stereoscopic intravenous DSA was performed 223 times and was useful in 157 examinations (70.4%) for the identification and stereoscopic observation of the abdominal and pelvic vessels. Thirty-seven intra-arterial DSAs were performed stereoscopically for cranial, abdominal and pelvic angiograms, and effective studies were obtained in 30 DSAs (81.1%), with demonstration of tumor stains and displacement of the vessels. It is necessary to use adequate compensation filters to obtain good stereoscopic DSAs, especially for cervical and thoracic DSAs. (author)

  8. Stereoscopic 3D graphics generation

    Science.gov (United States)

    Li, Zhi; Liu, Jianping; Zan, Y.

    1997-05-01

    Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment, and virtual reality, and stereoscopic 3D graphics generation is an important part of any stereoscopic 3D display system. In this paper we first describe the principle of stereoscopic display and summarize some methods for generating stereoscopic 3D graphics. Secondly, to overcome the problems of user-defined models (such as inconvenience and long modification cycles), we put forward a method based on vector graphics files. This allows more direct design, simple and easy modification of the model, and more convenient generation, and makes full use of the graphics accelerator card. Finally, we discuss how to speed up the generation.
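
    One standard way to generate the left- and right-eye images such a system needs is the off-axis (asymmetric frustum) projection, sketched below. This is the common textbook construction, not necessarily the generation path used in the paper.

```python
import numpy as np

def off_axis_frustum(eye_offset, screen_width, screen_height,
                     screen_dist, near, far):
    """OpenGL-style asymmetric-frustum projection matrix for one eye.
    The eye is shifted by `eye_offset` (half the interocular distance,
    negative for the left eye) parallel to a shared screen plane; the
    model-view matrix must also be translated by -eye_offset along x."""
    half_w, half_h = screen_width / 2.0, screen_height / 2.0
    scale = near / screen_dist
    left = (-half_w - eye_offset) * scale
    right = (half_w - eye_offset) * scale
    bottom, top = -half_h * scale, half_h * scale
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# 65 mm interocular distance, 0.4 x 0.3 m screen 0.5 m away: one matrix per eye.
P_left = off_axis_frustum(-0.0325, 0.4, 0.3, 0.5, 0.1, 100.0)
P_right = off_axis_frustum(+0.0325, 0.4, 0.3, 0.5, 0.1, 100.0)
```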

  9. Measuring system with stereoscopic x-ray television for accurate diagnosis

    International Nuclear Information System (INIS)

    Iwasaki, K.; Shimizu, S.

    1987-01-01

    X-ray stereoscopic television is diagnostically effective. The authors invented a measuring system using stereoscopic television whereby the coordinates of any two points and their separation can be measured in real time without physical contact. For this purpose, the distances between the two foci of the tube and between the tube and image intensifier were entered into a microcomputer beforehand, and any two points on the CRT stereoscopic image can be defined through the stereoscopic spectacles. The coordinates and distance are then displayed on the CRT monitor. By this means, measurements such as distance between vessels and size of organs are easily made
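
    The measurement principle described above, recovering the 3D coordinates of a point marked in both views of a calibrated twin-focus X-ray geometry, is essentially ray triangulation. A generic sketch follows; the geometry values are hypothetical and the routine is not the authors' implementation.

```python
import numpy as np

def triangulate_xray(src_a, src_b, img_a, img_b):
    """Recover a 3D point from a stereo X-ray pair: each ray runs from an
    X-ray focal spot (`src_*`) through the point's projection on the image
    intensifier (`img_*`); the point is taken as the midpoint of the
    shortest segment between the two rays.  The focus separation and
    focus-to-detector distance are assumed to be calibrated beforehand."""
    src_a, src_b = np.asarray(src_a, float), np.asarray(src_b, float)
    d_a = np.asarray(img_a, float) - src_a      # ray directions
    d_b = np.asarray(img_b, float) - src_b
    # Solve for the ray parameters of the closest-approach points.
    A = np.array([[d_a @ d_a, -d_a @ d_b],
                  [d_a @ d_b, -d_b @ d_b]])
    rhs = np.array([(src_b - src_a) @ d_a, (src_b - src_a) @ d_b])
    t_a, t_b = np.linalg.solve(A, rhs)
    return 0.5 * ((src_a + t_a * d_a) + (src_b + t_b * d_b))

# Two foci 60 mm apart, detector plane 1000 mm below the foci: a point
# 700 mm below the foci projects to different detector positions per focus.
p = triangulate_xray(src_a=(-30, 0, 0), src_b=(30, 0, 0),
                     img_a=(12.857, 0, 1000), img_b=(-12.857, 0, 1000))
print(p)   # ~[0, 0, 700]
```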

  10. The development and evaluation of a stereoscopic television system for use in nuclear environments

    International Nuclear Information System (INIS)

    Dumbreck, A.A.; Murphy, S.P.

    1987-01-01

    This paper describes the development and evaluation of a stereoscopic TV system at Harwell Laboratory. The theory of stereo image geometry is outlined, and criteria for the matching of stereoscopic pictures are given. A stereoscopic TV system designed for remote handling tasks has been produced; it provides two selectable angles of view and variable convergence, and the display is viewed through polarizing spectacles. Preliminary evaluations have indicated improved performance with no problems of operator fatigue.

  11. The development and evaluation of a stereoscopic television system for remote handling

    International Nuclear Information System (INIS)

    Dumbreck, A.A.; Murphy, S.P.; Smith, C.W.

    1990-01-01

    This paper describes the development and evaluation of a stereoscopic television system at Harwell Laboratory. The theory of stereo image geometry is outlined, and criteria for the matching of stereoscopic pictures are given. A stereoscopic television system designed for remote handling tasks has been produced; it provides two selectable angles of view and variable convergence, and the display is viewed through polarizing spectacles. Evaluations have indicated improved performance, with no problems of operator fatigue, over a wide range of applications. (author)

  12. Usability of stereoscopic view in teleoperation

    Science.gov (United States)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology relies on the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from viewpoints a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to the 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  13. The Role of Amodal Surface Completion in Stereoscopic Transparency

    Science.gov (United States)

    Anderson, Barton L.; Schmid, Alexandra C.

    2012-01-01

    Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments that demonstrate that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color, than when images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829

  14. Is eye damage caused by stereoscopic displays?

    Science.gov (United States)

    Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt

    2000-05-01

    A normally developing child achieves emmetropia in youth and maintains it; the cornea, lens and axial length of the eye grow in an astonishingly coordinated manner. In recent years research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals, where it was found that the growth of the axial length of the eyeball is controlled by image focus information from the retina. It was also shown that this visually guided growth control mechanism can become maladjusted, resulting in ametropia. It has thus been demonstrated that short-sightedness, for example, is not only caused by heredity but can be acquired under certain visual conditions. These conditions are similar to the conditions of viewing stereoscopic displays, where the normal accommodation-convergence coupling is broken. An evaluation is given of the potential for eye damage from viewing stereoscopic displays, and different viewing methods for stereoscopic displays are assessed. Moreover, guidance is given on how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.

  15. Digital stereoscopic cinema: the 21st century

    Science.gov (United States)

    Lipton, Lenny

    2008-02-01

    Over 1000 theaters in more than a dozen countries have been outfitted with digital projectors using the Texas Instruments DLP engine equipped to show field-sequential 3-D movies using the polarized method of image selection. Shuttering eyewear and advanced anaglyph products are also being deployed for image selection. Many studios are in production with stereoscopic films, and some have committed to producing their entire output of animated features in 3-D. This is a time of technology change for the motion picture industry.

  16. [Dendrobium officinale stereoscopic cultivation method].

    Science.gov (United States)

    Si, Jin-Ping; Dong, Hong-Xiu; Liao, Xin-Yan; Zhu, Yu-Qiu; Li, Hui

    2014-12-01

    The study aimed to make the most of the available space of a Dendrobium officinale cultivation facility, reveal the variation in yield and functional components of stereoscopically cultivated D. officinale, and improve quality, yield and efficiency. The agronomic traits and yield variation of stereoscopically cultivated D. officinale were studied in a field experiment. The contents of polysaccharide and extractum were determined using the phenol-sulfuric acid method and the 2010 edition of the "Chinese Pharmacopoeia", Appendix X A. The results showed that land utilization under stereoscopic cultivation increased 2.74 times, and the stems, leaves and their total fresh or dry weight per unit area of stereoscopically cultivated D. officinale were all heavier than those of ground-cultivated plants. There was no significant difference in polysaccharide content between stereoscopic cultivation and ground cultivation, but the extractum content and the total content of polysaccharide and extractum were significantly higher than those of the ground-cultivated plants. In addition, the polysaccharide content and the total content of polysaccharide and extractum from the top two levels of the stereoscopic culture matrix were significantly higher than those from the other levels and from ground cultivation. Stereoscopic cultivation effectively improves the utilization of space and the yield, while the total content of polysaccharides and extractum is significantly higher than that of ground-cultivated plants. The significant difference in Dendrobium polysaccharides among plants from different heights of the stereoscopic culture matrix may be associated with light.

  17. Six dimensional analysis with daily stereoscopic x-ray imaging of intrafraction patient motion in head and neck treatments using five points fixation masks

    International Nuclear Information System (INIS)

    Linthout, Nadine; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    The safety margins used to define the Planning Target Volume (PTV) should reflect the accuracy of the target localization during treatment that comprises both the reproducibility of the patient positioning and the positional uncertainty of the target, so both the inter- and intrafraction motion of the target. Our first aim in this study was to determine the intrafraction motion of patients immobilized with a five-point thermoplastic mask for head and neck treatments. The five-point masks have the advantage that the patient's shoulders as well as the cranial part of the patient's head is covered with the thermoplastic material that improves the overall immobilization of the head and neck region of the patient. Thirteen patients were consecutively assigned to use a five-point thermoplastic mask. The patients were positioned by tracking of infrared markers (IR) fixed to the immobilization device and stereoscopic x-ray images were used for daily on-line setup verification. Repositioning was carried out prior to treatment as needed; rotations were not corrected. Movements during treatment were monitored by real-time IR tracking. Intrafraction motion and rotation was supplementary assessed by a six-degree-of-freedom (6-D) fusion of x-ray images, taken before and after all 385 treatments, with DRR images generated from the planning CT data. The latter evaluates the movement of the patient within the thermoplastic mask independent from the mask movement, where IR tracking evaluates the movement of the mask caused by patient movement in the mask. These two movements are not necessarily equal to each other. The maximum intrafraction movement detected by IR tracking showed a shift [mean (SD; range)] of -0.1(0.7; 6.0), 0.1(0.6; 3.6), -0.2(0.8;5.5) mm in the vertical, longitudinal, and lateral direction, respectively, and rotations of 0.0(0.2; 1.6), 0.0(0.2; 1.7) and 0.2(0.2; 2.4) degrees about the vertical, longitudinal, and lateral axis, respectively. The standard deviations

  18. No-Reference Stereoscopic IQA Approach: From Nonlinear Effect to Parallax Compensation

    Directory of Open Access Journals (Sweden)

    Ke Gu

    2012-01-01

    Full Text Available The last decade has seen a boom in applications of stereoscopic images/videos and the corresponding technologies, such as 3D modeling, reconstruction, and disparity estimation. However, only a very limited number of stereoscopic image quality assessment metrics have been proposed over the years. In this paper, we propose a new no-reference stereoscopic image quality assessment algorithm based on a nonlinear additive model, an ocular dominance model, and saliency-based parallax compensation. Our studies using the Toyama database yield three valuable findings. First, the quality of a stereoscopic image has a nonlinear relationship with a direct summation of the two monoscopic image qualities. Second, it is reasonable to assume that the right-eye response has the higher impact on stereoscopic image quality, an assumption based on a sampling survey in ocular dominance research. Third, saliency-based parallax compensation, which accounts for differences in stereoscopic image content, is effective in improving the prediction performance of image quality metrics. Experimental results confirm that the proposed stereoscopic image quality assessment paradigm has superior prediction accuracy compared to state-of-the-art competitors.

  19. Stereoscopically Observing Manipulative Actions.

    Science.gov (United States)

    Ferri, S; Pauwels, K; Rizzolatti, G; Orban, G A

    2016-08-01

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors "stimulus type" (action, static control, and dynamic control), "stereopsis" (present, absent) and "viewpoint" (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior. © The Author 2016. Published by Oxford University Press.

  20. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    Science.gov (United States)

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available In this paper, detailed studies performed to develop a real-time system for surveillance of traffic flow, using monocular video cameras to find vehicle speeds for safe travelling, are presented. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, rectification of the video images is first performed to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose a sufficient number of points on the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, using the displacement vectors of the tracked points and the elapsed time, the velocity vectors of those points are computed. The computed velocity vectors are defined in the video image coordinate system and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in image space are then transformed to object space to find their absolute values. The accuracy of the estimated speed is approximately ±1–2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language. This software system has been used for all of the computations and test applications.
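
    A minimal sketch of the measurement chain described above, written in Python with OpenCV rather than the authors' C++ system: a homography estimated from four image points with known road-plane coordinates rectifies tracked positions into metres, and speed follows from displacement over elapsed time. All point coordinates below are made up for illustration.

```python
import numpy as np
import cv2

# Four image points (pixels) with known positions on the road plane (metres),
# e.g. lane markings of known length, define the rectifying homography.
img_pts = np.float32([[420, 680], [860, 680], [700, 420], [560, 420]])
road_pts = np.float32([[0, 0], [3.5, 0], [3.5, 30], [0, 30]])
H = cv2.getPerspectiveTransform(img_pts, road_pts)

def speed_kmh(p_prev, p_curr, dt):
    """Map two image positions of the same tracked vehicle point onto the
    road plane and convert the displacement over dt seconds to km/h."""
    pts = np.float32([[p_prev], [p_curr]])           # shape (2, 1, 2)
    ground = cv2.perspectiveTransform(pts, H).reshape(2, 2)
    dist_m = np.linalg.norm(ground[1] - ground[0])
    return dist_m / dt * 3.6

# Same point tracked over 5 frames of 25 fps video (dt = 0.2 s).
print(speed_kmh((640, 600), (650, 520), dt=0.2))
```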

  2. Efficient stereoscopic contents file format on the basis of ISO base media file format

    Science.gov (United States)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    A lot of 3D content has been widely used for multimedia services; however, real 3D video content has been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video content and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video content have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video. However, a user can only store and display such acquired stereoscopic content on his/her own devices, because no common file format exists for this content. This limitation prevents users from sharing their content with other users, which makes it difficult for the market for stereoscopic content to expand. Therefore, this paper proposes a common file format for stereoscopic content on the basis of the ISO base media file format, which enables users to store and exchange pure stereoscopic content. This technology is also being developed as an MPEG international standard, the stereoscopic video application format.
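
    The proposed format builds on the box (atom) structure of the ISO base media file format. As background, the sketch below walks the top-level boxes of such a file; the box types specific to the stereoscopic application format are not modelled here, and the file name is hypothetical.

```python
import struct

def iter_boxes(path):
    """Walk the top-level boxes of an ISO base media file.  Every box starts
    with a 32-bit big-endian size and a 4-character type; size == 1 means a
    64-bit size follows, and size == 0 means the box runs to the end of the
    file.  A stereoscopic application format would add its own box types on
    top of this same container structure."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                        # 64-bit "largesize"
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            yield box_type.decode("ascii", "replace"), size
            if size == 0:                        # box extends to end of file
                break
            f.seek(size - header_len, 1)         # skip payload to next box

# Hypothetical usage:
# for box_type, size in iter_boxes("stereo_clip.mp4"):
#     print(box_type, size)
```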

  3. Flow Mapping of a Jet in Crossflow with Stereoscopic PIV

    DEFF Research Database (Denmark)

    Meyer, Knud Erik; Özcan, Oktay; Westergaard, C. H.

    2002-01-01

    Stereoscopic Particle Image Velocimetry (PIV) has been used to make a three-dimensional flow mapping of a jet in crossflow. The Reynolds number based on the free stream velocity and the jet diameter was nominally 2400. A jet-to-crossflow velocity ratio of 3.3 was used. Details of the formation...

  4. Image-Guided Localization Accuracy of Stereoscopic Planar and Volumetric Imaging Methods for Stereotactic Radiation Surgery and Stereotactic Body Radiation Therapy: A Phantom Study

    International Nuclear Information System (INIS)

    Kim, Jinkoo; Jin, Jian-Yue; Walls, Nicole; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.; Ryu, Samuel

    2011-01-01

    Purpose: To evaluate the positioning accuracies of two image-guided localization systems, ExacTrac and On-Board Imager (OBI), in a stereotactic treatment unit. Methods and Materials: An anthropomorphic pelvis phantom with eight internal metal markers (BBs) was used. The center of one BB was set as plan isocenter. The phantom was set up on a treatment table with various initial setup errors. Then, the errors were corrected using each of the investigated systems. The residual errors were measured with respect to the radiation isocenter using orthogonal portal images with a field size of 3 × 3 cm². The angular localization discrepancies of the two systems and the correction accuracy of the robotic couch were also studied. A pair of pre- and post-cone beam computed tomography (CBCT) images was acquired for each angular correction. Then, the correction errors were estimated by using the internal BBs through fiducial marker-based registrations. Results: The isocenter localization errors (μ ± σ) in the left/right, posterior/anterior, and superior/inferior directions were, respectively, -0.2 ± 0.2 mm, -0.8 ± 0.2 mm, and -0.8 ± 0.4 mm for ExacTrac, and 0.5 ± 0.7 mm, 0.6 ± 0.5 mm, and 0.0 ± 0.5 mm for OBI CBCT. The registration angular discrepancy was 0.1° ± 0.2° between the two systems, and the maximum angle correction error of the robotic couch was 0.2° about all axes. Conclusion: Both the ExacTrac and the OBI CBCT systems showed approximately 1 mm isocenter localization accuracies. The angular discrepancy of the two systems was minimal, and the robotic couch angle correction was accurate. These positioning uncertainties should be taken as a lower bound because the results were based on a rigid dosimetry phantom.
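
    The fiducial-marker-based registrations used above to estimate correction errors are, at their core, a rigid point-set alignment problem. A generic SVD-based (Kabsch) solution is sketched below as an illustration; it is not the vendors' or authors' implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping the
    marker positions `src` onto `dst`, the standard SVD solution for
    fiducial-marker-based registration.  Both inputs are (N, 3) arrays."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def residuals(src, dst):
    """Per-marker residual error (mm) after the rigid alignment."""
    R, t = rigid_register(src, dst)
    return np.linalg.norm((R @ np.asarray(src, float).T).T + t - dst, axis=1)
```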

  5. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  6. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, Jan J.; Albertazzi, Liliana; van Doorn, Andrea J.; van Ee, Raymond; van de Grind, Wim A.; Kappers, Astrid M L; Lappin, Joe S.; Farley Norman, J.; (Stijn) Oomes, A. H J; te Pas, Susan P.; Phillips, Flip; Pont, Sylvia C.; Richards, Whitman A.; Todd, James T.; Verstraten, Frans A J; de Vries, Sjoerd

    The issue of the existence of planes-understood as the carriers of a nexus of straight lines-in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  7. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery), or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  8. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal to understand neuroplasticity, which is a key characteristic within the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing of the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if the eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images. The fused percept highly relies on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. However, binocular rivalry reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
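
    The manipulation described above, destroying Fourier phase regularity while leaving the amplitude spectrum intact, can be illustrated with a generic phase-scrambling routine of the following kind (a sketch, not the authors' stimulus-generation code).

```python
import numpy as np

def scramble_phase(image, rng=None):
    """Replace the Fourier phase of a grayscale image with the phase of a
    random noise image, leaving the amplitude spectrum untouched.  Using a
    real noise image keeps the spectrum Hermitian, so the inverse FFT is
    real up to numerical precision."""
    rng = np.random.default_rng(rng)
    noise_phase = np.angle(np.fft.fft2(rng.random(image.shape)))
    spectrum = np.abs(np.fft.fft2(image)) * np.exp(1j * noise_phase)
    return np.real(np.fft.ifft2(spectrum))

img = np.random.default_rng(0).random((64, 64))
out = scramble_phase(img, rng=1)
# Amplitude spectra should match to numerical precision; the phases do not.
print(np.allclose(np.abs(np.fft.fft2(img)), np.abs(np.fft.fft2(out))))
```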

  9. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy and then discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display capability into the rendering engine of the web browser. For each 3D web page, our browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.

  10. Optimal display conditions for quantitative analysis of stereoscopic cerebral angiograms

    International Nuclear Information System (INIS)

    Charland, P.; Peters, T.; McGill Univ., Montreal, Quebec

    1996-01-01

    For several years the authors have been using a stereoscopic display as a tool in the planning of stereotactic neurosurgical techniques. This PC-based workstation allows the surgeon to interact with and view vascular images in three dimensions, as well as to perform quantitative analysis of the three-dimensional (3-D) space. Some of the perceptual issues relevant to the presentation of medical images on this stereoscopic display were addressed in five experiments. The authors show that a number of parameters--namely the shape, color, and depth cue, associated with a cursor--as well as the image filtering and observer position, have a role in improving the observer's perception of a 3-D image and his ability to localize points within the stereoscopically presented 3-D image. However, an analysis of the results indicates that while varying these parameters can lead to an effect on the performance of individual observers, the effects are not consistent across observers, and the mean accuracy remains relatively constant under the different experimental conditions

  11. On so-called paradoxical monocular stereoscopy.

    Science.gov (United States)

    Koenderink, J J; van Doorn, A J; Kappers, A M

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

  12. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  13. Quantitative evaluation of papilledema from stereoscopic color fundus photographs.

    Science.gov (United States)

    Tang, Li; Kardon, Randy H; Wang, Jui-Kai; Garvin, Mona K; Lee, Kyungmoo; Abràmoff, Michael D

    2012-07-03

    To derive a computerized measurement of optic disc volume from digital stereoscopic fundus photographs for the purpose of diagnosing and managing papilledema. Twenty-nine pairs of stereoscopic fundus photographs and optic nerve head (ONH) centered spectral domain optical coherence tomography (SD-OCT) scans were obtained at the same visit in 15 patients with papilledema. Some patients were imaged at multiple visits in order to assess their changes. Three-dimensional shape of the ONH was estimated from stereo fundus photographs using an automated multi-scale stereo correspondence algorithm. We assessed the correlation of the stereo volume measurements with the SD-OCT volume measurements quantitatively, in terms of volume of retinal surface elevation above a reference plane and also to expert grading of papilledema from digital fundus photographs using the Frisén grading scale. The volumetric measurements of retinal surface elevation estimated from stereo fundus photographs and OCT scans were positively correlated (correlation coefficient r(2) = 0.60; P photographs compares favorably with that from OCT scans and with expert grading of papilledema severity. Stereoscopic color imaging of the ONH combined with a method of automated shape reconstruction is a low-cost alternative to SD-OCT scans that has potential for a more cost-effective diagnosis and management of papilledema in a telemedical setting. An automated three-dimensional image analysis method was validated that quantifies the retinal surface topography with an imaging modality that has lacked prior objective assessment.

  14. Enhancement of stereoscopic comfort by fast control of frequency content with wavelet transform

    Science.gov (United States)

    Lemmer, Nicolas; Moreau, Guillaume; Fuchs, Philippe

    2003-05-01

    As the scope of virtual reality applications including stereoscopic imaging becomes wider, it is clear that not every designer of a VR application considers its constraints in order to make correct use of stereo. Stereoscopic imagery, though not required, can be a useful tool for depth perception. It is possible to limit the depth of field, as shown by Perrin, who has also studied the link between the ability to fuse stereoscopic images (stereopsis) and local disparity and spatial frequency content. We show how this work can be extended and enhanced, especially from the computational complexity point of view. Wavelet theory allows us to define a local spatial frequency and, from it, a local measure of stereoscopic comfort. This measure is based on local spatial frequency and disparity as well as on the observations made by Woepking. Local comfort estimation allows us to propose several filtering methods to enhance this comfort. The idea is to modify the images so that they satisfy a "stereoscopic comfort condition", defined as a threshold on the local comfort measure. More technically, we seek to limit high spatial frequency content where disparity is high, using fast algorithms.
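
    As a rough illustration of the filtering idea (limiting high spatial frequencies where disparity is large), the sketch below attenuates the finest wavelet subbands by an amount that grows with the mean disparity. It is a global simplification of the local, comfort-map-driven processing described in the record, and the thresholds and wavelet choice are made up.

```python
import numpy as np
import pywt

def comfort_filter(image, disparity, max_comfort_disp=20.0, levels=3):
    """Attenuate the finest wavelet detail subbands of `image` by a factor
    that decreases as the mean absolute disparity exceeds the comfort
    threshold.  A single global gain is used here; a local version would
    vary the gain with a per-pixel comfort map."""
    excess = max(0.0, np.mean(np.abs(disparity)) / max_comfort_disp - 1.0)
    gain = 1.0 / (1.0 + excess)          # 1 when comfortable, < 1 otherwise
    coeffs = pywt.wavedec2(image, "db2", level=levels)
    cH, cV, cD = coeffs[-1]              # finest-scale detail subbands
    coeffs[-1] = (cH * gain, cV * gain, cD * gain)
    return pywt.waverec2(coeffs, "db2")

img = np.random.default_rng(0).random((128, 128))
disp = np.full((128, 128), 35.0)         # uncomfortably large disparity
filtered = comfort_filter(img, disp)
```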

  15. Stereoscopic game design and evaluation

    Science.gov (United States)

    Rivett, Joe; Holliman, Nicolas

    2013-03-01

    We report on a new game design where the goal is to make the stereoscopic depth cue sufficiently critical to success that game play should become impossible without using a stereoscopic 3D (S3D) display and, at the same time, we investigate whether S3D game play is affected by screen size. Before we detail our new game design we review previously unreported results from our stereoscopic game research over the last ten years at the Durham Visualisation Laboratory. This demonstrates that game players can achieve significantly higher scores using S3D displays when depth judgements are an integral part of the game. Method: We design a game where almost all depth cues, apart from the binocular cue, are removed. The aim of the game is to steer a spaceship through a series of oncoming hoops where the viewpoint of the game player is from above, with the hoops moving right to left across the screen towards the spaceship, to play the game it is essential to make decisive depth judgments to steer the spaceship through each oncoming hoop. To confound these judgements we design altered depth cues, for example perspective is reduced as a cue by varying the hoop's depth, radius and cross-sectional size. Results: Players were screened for stereoscopic vision, given a short practice session, and then played the game in both 2D and S3D modes on a seventeen inch desktop display, on average participants achieved a more than three times higher score in S3D than they achieved in 2D. The same experiment was repeated using a four metre S3D projection screen and similar results were found. Conclusions: Our conclusion is that games that use the binocular depth cue in decisive game judgements can benefit significantly from using an S3D display. Based on both our current and previous results we additionally conclude that display size, from cell-phone, to desktop, to projection display does not adversely affect player performance.

  16. Clinical Assessment of a New Stereoscopic Digital Angiography System

    International Nuclear Information System (INIS)

    Moll, Thierry; Douek, Philippe; Finet, Gerard; Turjman, Francis; Picard, Catherine; Revel, Didier; Amiel, Michel

    1998-01-01

    Purpose: To assess the clinical feasibility of an experimental modified angiographic system capable of real-time digital stereofluoroscopy and stereography in X-ray angiography, using a twin-focus tube and a stereoscopic monitor. Methods: We report the experience obtained in 37 patients with a well-documented examination. The patients were examined for coronary angiography (11 cases), aortography (7 cases), pulmonary angiography (6 cases), inferior vena cava filter placement (2 cases), and cerebral angiography (11 cases). Six radiologists were asked to use stereoscopic features for fluoroscopy and angiography. A questionnaire was designed to record their subjective evaluation of stereoscopic image quality, ergonomics of the system, and its medical interest. Results: Stereofluoroscopy was successfully used in 25 of 37 cases; diplopia and/or ghost images were reported in 6 cases. It was helpful for aortic catheterization in 10 cases and for selective catheterization in 5 cases. In stereoangiography, depth was easily and accurately perceived in 27 of 37 cases; diplopia and/or ghost images were reported in 4 cases. A certain gain in the three-dimensional evaluation of the anatomy and relation between vessels and lesions was noted. As regards ergonomic considerations, polarized spectacles were not considered cumbersome. Visual fatigue and additional work were variously reported. Stereoshift tuning before X-ray acquisition was not judged to be a limiting factor. Conclusion: A twin-focus X-ray tube and a polarized shutter for stereoscopic display allowed effective real-time three-dimensional perception of angiographic images. Our clinical study suggests no clear medical interest for diagnostic examinations, but the field of interventional radiology needs to be investigated

  17. 21 CFR 886.1870 - Stereoscope.

    Science.gov (United States)

    2010-04-01

    ... exercises of eye muscles. (b) Classification. Class I (general controls). The AC-powered device and the... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1870 Stereoscope. (a) Identification. A stereoscope is an AC...

  18. Analysis of brain activity and response during monoscopic and stereoscopic visualization

    Science.gov (United States)

    Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele

    2012-03-01

    Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is whether and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. The research aims represent a challenge, due to the large number of sensory, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed some preliminary experiments collecting electroencephalographic (EEG) data from a group of users using a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.

  19. SU-E-J-39: Comparison of PTV Margins Determined by In-Room Stereoscopic Image Guidance and by On-Board Cone Beam Computed Tomography Technique for Brain Radiotherapy Patients

    International Nuclear Information System (INIS)

    Ganesh, T; Paul, S; Munshi, A; Sarkar, B; Krishnankutty, S; Sathya, J; George, S; Jassal, K; Roy, S; Mohanti, B

    2014-01-01

    Purpose: Stereoscopic in-room kV image guidance is a faster tool for daily monitoring of patient positioning. Our centre, for the first time in the world, has integrated such a solution from BrainLAB (ExacTrac) with Elekta's volumetric cone beam computed tomography (XVI). Using van Herk's formula, we compared the planning target volume (PTV) margins calculated by both these systems for patients treated with brain radiotherapy. Methods: For a total of 24 patients who received partial or whole brain radiotherapy, verification images were acquired for 524 treatment sessions by XVI and for 334 sessions by ExacTrac out of the total 547 sessions. Systematic and random errors were calculated in the cranio-caudal, lateral and antero-posterior directions for both techniques. PTV margins were then determined using the van Herk formula. Results: In the cranio-caudal direction, the systematic error, random error and calculated PTV margin were found to be 0.13 cm, 0.12 cm and 0.41 cm with XVI and 0.14 cm, 0.13 cm and 0.44 cm with ExacTrac. The corresponding values in the lateral direction were 0.13 cm, 0.1 cm and 0.4 cm with XVI and 0.13 cm, 0.12 cm and 0.42 cm with ExacTrac imaging. The same parameters in the antero-posterior direction were 0.1 cm, 0.11 cm and 0.34 cm with XVI and 0.13 cm, 0.16 cm and 0.43 cm with ExacTrac imaging. The margins estimated with the two imaging modalities were comparable within a ± 1 mm limit. Conclusion: Verification of setup errors along the major axes by two independent imaging systems showed that the results are comparable and within ± 1 mm. This implies that planar-imaging-based ExacTrac can yield equal accuracy in setup error determination as the time-consuming volumetric imaging, which is considered the gold standard. Accordingly, PTV margins estimated by this faster imaging technique can be confidently used in the clinical setup.
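
    The margin recipe referred to above is the standard van Herk formula, M = 2.5 Σ + 0.7 σ, where Σ is the systematic and σ the random error. As a check, the sketch below (Python, values taken from the abstract) reproduces the quoted margins to within the rounding of the published errors; it illustrates the formula and is not code from the study.

        # van Herk population-based PTV margin: M = 2.5*Sigma + 0.7*sigma (all in cm).
        def van_herk_margin(systematic, random):
            return 2.5 * systematic + 0.7 * random

        # (axis, system): (systematic error, random error), as quoted in the abstract.
        errors = {
            ("cranio-caudal", "XVI"):         (0.13, 0.12),   # reported margin 0.41 cm
            ("cranio-caudal", "ExacTrac"):    (0.14, 0.13),   # reported margin 0.44 cm
            ("lateral", "XVI"):               (0.13, 0.10),   # reported margin 0.40 cm
            ("lateral", "ExacTrac"):          (0.13, 0.12),   # reported margin 0.42 cm
            ("antero-posterior", "XVI"):      (0.10, 0.11),   # reported margin 0.34 cm
            ("antero-posterior", "ExacTrac"): (0.13, 0.16),   # reported margin 0.43 cm
        }

        for (axis, system), (sig_sys, sig_rand) in errors.items():
            print(f"{axis:17s} {system:9s} margin = {van_herk_margin(sig_sys, sig_rand):.2f} cm")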

  20. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
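
    The interplay between image-point noise and triangulation baseline can be illustrated with a simple rectified two-view depth sketch (Python). This is a toy stand-in for the paper's full simulation model; focal length, depth, noise level and baselines are all assumed values.

        # Depth from disparity in rectified two-view geometry: Z = f * B / d.
        # Monte Carlo estimate of depth scatter vs. baseline for a fixed level of
        # image-point localisation noise (all numbers are illustrative assumptions).
        import numpy as np

        rng = np.random.default_rng(0)
        f = 1000.0          # focal length in pixels
        Z_true = 5.0        # true depth of the scene point in metres
        pixel_sigma = 0.5   # std. dev. of point localisation noise, per image

        for B in (0.05, 0.10, 0.25, 0.50, 1.00):             # baselines in metres
            d_true = f * B / Z_true                           # noise-free disparity
            noise = rng.normal(0.0, pixel_sigma * np.sqrt(2), 100_000)
            Z_est = f * B / (d_true + noise)                  # noisy triangulated depth
            print(f"baseline {B:4.2f} m -> depth std ~ {Z_est.std():.3f} m")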

  1. High-Definition 3D Stereoscopic Microscope Display System for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Yoo Kwan-Hee

    2010-01-01

    Full Text Available Biomedical research has been performed using advanced information techniques, and high-quality microscopic stereo images have been used by researchers and doctors for various aims in biomedical research and surgery. To visualize the stereo images, many related devices have been developed. However, the devices are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. In this paper, we describe the development of a high-definition (HD) three-dimensional (3D) stereoscopic imaging display system for use with an operating microscope or in animal experiments. The system consists of a stereoscopic camera part, an image processing device for stereoscopic video recording, and a stereoscopic display. In order to reduce eyestrain and viewer fatigue, we use a pre-existing stereo microscope structure and a polarized-light stereoscopic display method that does not reduce the quality of the stereo images. The developed system can overcome the discomfort of the eyepiece and the eyestrain caused by use over a long period of time.

  2. Depth Perception In Remote Stereoscopic Viewing Systems

    Science.gov (United States)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Further predicts dynamic stereoscopic depth distortion reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.

  3. Brief history of electronic stereoscopic displays

    Science.gov (United States)

    Lipton, Lenny

    2012-02-01

    A brief history of recent developments in electronic stereoscopic displays is given, concentrating on products that have succeeded in the marketplace and hence have had a significant influence on future implementations. The concentration is on plano-stereoscopic (two-view) technology because it is now the dominant display modality in the marketplace. Stereoscopic displays were created for the motion picture industry a century ago, and this technology influenced the development of products for science and industry, which in turn influenced product development for entertainment.

  4. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  5. Taking space literally: reconceptualizing the effects of stereoscopic representation on user experience

    Directory of Open Access Journals (Sweden)

    Benny Liebold

    2013-03-01

    Full Text Available Recently, cinemas, home theater systems and game consoles have undergone a rapid evolution towards stereoscopic representation, with recipients gradually becoming accustomed to these changes. Stereoscopy techniques in most media present two offset images separately to the left and right eye of the viewer (usually with the help of glasses separating both images), resulting in the perception of three-dimensional depth. In contrast to these mass market techniques, true 3D volumetric displays or holograms that display an image in three full dimensions are relatively uncommon. The visual quality and visual comfort of stereoscopic representation are constantly being improved by the industry.

  6. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, they can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of agents are unknown. In this paper, we propose a system of multiple monocular agents, with unknown relative starting positions, that generates a semi-dense global map of the environment.

  7. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs, potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high-profile titles on established platforms like Windows PC and PS3, there is a lack of GPU-independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set-top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real time. The architecture includes a method of analyzing 2D games and using rule-based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth-based image rendering, both in terms of frame rates and impact on battery consumption.
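
    The underlying geometry of any two-view stereo renderer, whether driver-injected or built into the engine, is a pair of horizontally offset virtual cameras, usually with asymmetric ("off-axis") frusta so both views converge on a chosen zero-parallax plane. The sketch below (Python/NumPy) builds such a matrix pair; it illustrates the general technique, not the specific driver described above, and all parameter values are assumptions.

        # Left/right eye view offsets and off-axis projection matrices for
        # plano-stereoscopic rendering (OpenGL-style conventions).
        import numpy as np

        def off_axis_projection(left, right, bottom, top, near, far):
            """Asymmetric-frustum projection matrix (same layout as glFrustum)."""
            return np.array([
                [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
                [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
                [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
                [0.0, 0.0, -1.0, 0.0]])

        def stereo_matrices(fov_y_deg, aspect, near, far, separation, convergence):
            """Return (view_offset, projection) for the left and right eyes."""
            top = near * np.tan(np.radians(fov_y_deg) / 2.0)
            bottom, half_w = -top, top * aspect
            pairs = []
            for sign in (-1.0, +1.0):                 # -1 = left eye, +1 = right eye
                shift = sign * 0.5 * separation * near / convergence
                proj = off_axis_projection(-half_w - shift, half_w - shift,
                                           bottom, top, near, far)
                view_offset = np.eye(4)
                view_offset[0, 3] = -sign * 0.5 * separation   # slide camera sideways
                pairs.append((view_offset, proj))
            return pairs

        # Example: 6.5 cm interaxial separation, zero-parallax plane at 5 scene units.
        for eye, (view, proj) in zip(("left", "right"),
                                     stereo_matrices(60.0, 16/9, 0.1, 100.0, 0.065, 5.0)):
            print(eye, "frustum skew:", round(proj[0, 2], 4))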

  8. Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters

    Science.gov (United States)

    Zuniga Zamalloa, C. C.; Landry, B. J.

    2017-12-01

    The present work is concerned with the retrieval of surface velocity in flows using a stereoscopic setup and finding the correspondence between the images via feature tracking (FT). Feature tracking provides the key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input, which allowed for rapid, automated processing of imagery.
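
    For readers unfamiliar with feature tracking, the sketch below (Python with OpenCV) shows a generic single-camera KLT feature-tracking step between two consecutive frames and converts the tracked displacements into velocities. It is a minimal stand-in for the stereoscopic pipeline described above; the file names, frame interval and pixel scale are assumptions.

        # Generic KLT feature tracking between two frames of a surface-image sequence.
        import cv2
        import numpy as np

        frame0 = cv2.imread("surface_t0.png", cv2.IMREAD_GRAYSCALE)   # assumed inputs
        frame1 = cv2.imread("surface_t1.png", cv2.IMREAD_GRAYSCALE)
        dt = 1.0 / 30.0               # frame interval in seconds (assumed)
        metres_per_pixel = 0.002      # image scale (assumed)

        # Detect salient features in the first frame, then track them into the second;
        # no interrogation windows or masks need to be specified by the user.
        p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=2000,
                                     qualityLevel=0.01, minDistance=5)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None,
                                                 winSize=(21, 21))

        ok = status.ravel() == 1
        displacement = (p1[ok] - p0[ok]).reshape(-1, 2)                # pixels
        velocity = displacement * metres_per_pixel / dt                # m/s per feature
        print("tracked", len(velocity), "features, mean speed =",
              np.linalg.norm(velocity, axis=1).mean(), "m/s")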

  9. Remote non-invasive stereoscopic imaging of blood vessels: first in-vivo results of a new multispectral contrast enhancement technology

    NARCIS (Netherlands)

    Wieringa, F.P.; Mastik, F.; Cate, F.J. ten; Neumann, H.A.M.; Steen, A.F.W. van der

    2006-01-01

    We describe a contactless optical technique selectively enhancing superficial blood vessels below variously pigmented intact human skin by combining images in different spectral bands. Two CMOS-cameras, with apochromatic lenses and dual-band LED-arrays, simultaneously streamed Left (L) and Right (R)

  10. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  11. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomy structures at the target surgical locations. However, previous research attempts at using AR technology in monocular MIS surgical scenes have been mainly focused on the information overlay without addressing correct spatial calibrations, which could lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping. A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed for building a dense surface using only unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and the Poisson surface reconstruction framework for real-time processing of the point cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations.
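
    The densification step, turning an unorganised sparse SLAM point cloud into a smooth surface, can be prototyped with off-the-shelf tools. The sketch below uses Open3D's normal estimation and Poisson reconstruction as a convenient stand-in for the MLS-plus-Poisson pipeline described above; the file name and parameter values are assumptions.

        # Densify a sparse, unorganised SLAM point cloud into a triangle mesh.
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("slam_map_points.ply")   # sparse SLAM points (assumed file)
        pcd = pcd.voxel_down_sample(voxel_size=0.002)           # light cleanup

        # Poisson reconstruction needs consistently oriented normals.
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
        pcd.orient_normals_consistent_tangent_plane(k=15)

        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)
        o3d.io.write_triangle_mesh("dense_surface.ply", mesh)
        print(mesh)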

  12. Traveling via Rome through the Stereoscope: Reality, Memory, and Virtual Travel

    Directory of Open Access Journals (Sweden)

    Douglas M. Klahr

    2016-06-01

    Full Text Available Underwood and Underwood’s 'Rome through the Stereoscope' of 1902 was a landmark in stereoscopic photography publishing, both as an intense, visually immersive experience and as a cognitively demanding exercise. The set consisted of a guidebook, forty-six stereographs, and five maps whose notations enabled the reader/viewer to precisely replicate the location and orientation of the photographer at each site. Combined with the extensive narrative within the guidebook, the maps and images guided its users through the city via forty-six sites, whether as an example of armchair travel or an actual travel companion. The user’s experience is examined and analyzed within the following parameters: the medium of stereoscopic photography, narrative, geographical imagination, and memory, bringing forth issues of movement, survey and route frames of reference, orientation, visualization, immersion, and primary versus secondary memories. 'Rome through the Stereoscope' was an example of virtual travel, and the process of fusing dual images into one — stereoscopic synthesis — further demarcated the experience as a virtual environment.

  13. METHOD FOR DETERMINING THE SPATIAL COORDINATES IN THE ACTIVE STEREOSCOPIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Valery V. Korotaev

    2014-11-01

    Full Text Available The paper deals with the structural scheme of an active stereoscopic system and the algorithm of its operation, providing fast calculation of spatial coordinates. The system includes two identical cameras, forming a stereo pair, and a laser scanner, which provides vertical scanning of the space in front of the system by the laser beam. A separate synchronizer provides synchronous operation of the two cameras. The developed algorithm of the system operation is implemented in MATLAB. In the proposed algorithm, the influence of background light is eliminated by interframe processing. The algorithm is based on precomputation of the coordinates of epipolar lines and corresponding points in the stereoscopic image. These data are used for quick calculation of the three-dimensional coordinates of points that form the three-dimensional images of objects. A description of an experiment on a physical model is given. Experimental results confirm the efficiency of the proposed active stereoscopic system and its operation algorithm. The proposed scheme of the active stereoscopic system and the calculating method for the spatial coordinates can be recommended for the creation of stereoscopic systems operating in real time and at high processing speed: devices for face recognition, systems for position control of railway track, and automobile active safety systems.
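
    The core computation being accelerated, recovering a 3D point from a matched pair of image points in a calibrated stereo rig, can be written in a few lines. The sketch below (Python with OpenCV) triangulates one laser-spot correspondence; the intrinsics, baseline and pixel coordinates are illustrative assumptions, not values from the paper.

        # Triangulate one matched point from a calibrated stereo pair.
        import cv2
        import numpy as np

        K = np.array([[1200.0, 0.0, 640.0],       # shared camera intrinsics (assumed)
                      [0.0, 1200.0, 480.0],
                      [0.0, 0.0, 1.0]])
        baseline = 0.12                            # camera separation in metres (assumed)

        P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P_right = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

        # One detected laser spot in the left image and its match in the right image.
        pt_left = np.array([[702.4], [517.8]])
        pt_right = np.array([[630.1], [517.8]])

        X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)   # homogeneous
        X = (X_h[:3] / X_h[3]).ravel()
        print("spatial coordinates (m):", X)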

  14. Application of longitudinal magnification effect to magnification stereoscopic angiography. A new method of cerebral angiography

    International Nuclear Information System (INIS)

    Doi, K.; Rossmann, K.; Duda, E.E.

    1976-01-01

    A new method of stereoscopic cerebral angiography was developed which employs 2X radiographic magnification. In order to obtain the same depth perception in the object as with conventional contact stereoscopic angiography, one can make the x-ray exposures at two focal spot positions which are separated by only 1 inch, whereas the contact technique requires a separation of 4 inches. The smaller distance is possible because, with 2X magnification, the transverse detail in the object is magnified by a factor of two, but the longitudinal detail, which is related to the stereo effect, is magnified by a factor of four, due to the longitudinal magnification effect. The small focal spot separation results in advantages such as improved stereoscopic image detail, better image quality, and low radiation exposure to the patient
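
    The arithmetic behind the reduced focal-spot separation is the longitudinal magnification effect: axial magnification is approximately the square of the transverse magnification, so at 2X geometric magnification the stereo-relevant depth detail is magnified roughly 4X. A trivial check in Python:

        # Longitudinal magnification ~= (transverse magnification)^2, so a 2X technique
        # needs only a quarter of the focal-spot separation of the contact technique
        # for the same perceived depth.
        transverse_magnification = 2.0
        longitudinal_magnification = transverse_magnification ** 2

        contact_separation_inches = 4.0
        required_separation = contact_separation_inches / longitudinal_magnification
        print(f"equivalent focal-spot separation at 2X: {required_separation:.0f} inch")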

  15. Application of longitudinal magnification effect to magnification stereoscopic angiography. A new method of cerebral angiography

    Energy Technology Data Exchange (ETDEWEB)

    Doi, K.; Rossmann, K.; Duda, E.E.

    1976-01-01

    A new method of stereoscopic cerebral angiography was developed which employs 2X radiographic magnification. In order to obtain the same depth perception in the object as with conventional contact stereoscopic angiography, one can make the x-ray exposures at two focal spot positions which are separated by only 1 inch, whereas the contact technique requires a separation of 4 inches. The smaller distance is possible because, with 2X magnification, the transverse detail in the object is magnified by a factor of two, but the longitudinal detail, which is related to the stereo effect, is magnified by a factor of four, due to the longitudinal magnification effect. The small focal spot separation results in advantages such as improved stereoscopic image detail, better image quality, and low radiation exposure to the patient.

  16. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  17. Two Eyes, 3D: Stereoscopic Design Principles

    Science.gov (United States)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects when shown with 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads in order to record spatial manipulation of the device and look for elements of embodied cognition. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open source licenses. More information is at www.twoeyes3d.org.

  18. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    OpenAIRE

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of ...

  19. Stereoscopic HDTV Research at NHK Science and Technology Research Laboratories

    CERN Document Server

    Yamanoue, Hirokazu; Nojiri, Yuji

    2012-01-01

    This book focuses on the two psychological factors of naturalness and ease of viewing of three-dimensional high-definition television (3D HDTV) images. It has been said that distortions peculiar to stereoscopic images, such as the “puppet theater” effect or the “cardboard” effect, spoil the sense of presence. Whereas many earlier studies have focused on geometrical calculations about these distortions, this book instead describes the relationship between the naturalness of reproduced 3D HDTV images and the nonlinearity of depthwise reproduction. The ease of viewing of each scene is regarded as one of the causal factors of visual fatigue. Many of the earlier studies have been concerned with the accurate extraction of local parallax; however, this book describes the typical spatiotemporal distribution of parallax in 3D images. The purpose of the book is to examine the correlations between the psychological factors and amount of characteristics of parallax distribution in order to understand the characte...

  20. 21 CFR 886.1880 - Fusion and stereoscopic target.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Fusion and stereoscopic target. 886.1880 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1880 Fusion and stereoscopic target. (a) Identification. A fusion and stereoscopic target is a device intended for use as a viewing object...

  1. Evaluating visual discomfort in stereoscopic projection-based CAVE system with a close viewing distance

    Science.gov (United States)

    Song, Weitao; Weng, Dongdong; Feng, Dan; Li, Yuqian; Liu, Yue; Wang, Yongtian

    2015-05-01

    As one of the most popular immersive Virtual Reality (VR) systems, the stereoscopic cave automatic virtual environment (CAVE) system typically consists of 4 to 6 3m-by-3m sides of a room made of rear-projected screens. While many endeavors have been made to reduce the size of the projection-based CAVE system, the issue of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE with a close viewing distance has seldom been tackled. In this paper, we propose a lightweight approach which utilizes a convex eyepiece to reduce visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece in a large depth of field (DOF) at a close viewing distance, both objectively and subjectively. The result shows the positive effects of the convex eyepiece on the relief of eyestrain.

  2. Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization

    Science.gov (United States)

    Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…

  3. Stereoscopic 3D display with dynamic optical correction for recovering from asthenopia

    Science.gov (United States)

    Shibata, Takashi; Kawai, Takashi; Otsuki, Masaki; Miyake, Nobuyuki; Yoshihara, Yoshihiro; Iwasaki, Tsuneto

    2005-03-01

    The purpose of this study was to consider a practical application of a newly developed stereoscopic 3-D display that solves the problem of discrepancy between accommodation and convergence. The display uses dynamic optical correction to reduce the discrepancy, and can present images as if they are actually remote objects. The authors thought the display may assist in recovery from asthenopia, which is often caused when the eyes focus on a nearby object for a long time, such as in VDT (Visual Display Terminal) work. In general, recovery from asthenopia, and especially accommodative asthenopia, is achieved by focusing on distant objects. In order to verify this hypothesis, the authors performed visual acuity tests using Landolt rings before and after presenting stereoscopic 3-D images, and evaluated the degree of recovery from asthenopia. The experiment led to three main conclusions: (1) Visual acuity rose after viewing stereoscopic 3-D images on the developed display. (2) Recovery from asthenopia was particularly effective for the dominant eye in comparison with the other eye. (3) Interviews with the subjects indicated that the Landolt rings were particularly clear after viewing the stereoscopic 3-D images.

  4. Measurement of mean rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, Oktay; Meyer, Knud Erik; Larsen, Poul Scheel

    2005-01-01

    A technique is described for measuring the mean velocity gradient (rate-of-displacement) tensor by using a conventional stereoscopic particle image velocimetry (SPIV) system. Planar measurement of the mean vorticity vector, rate-of-rotation and rate-of-strain tensors and the production of turbule...
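
    As a reminder of what such planar tensor measurements involve, the sketch below (Python/NumPy) decomposes in-plane velocity gradients from a three-component velocity field sampled on a plane into symmetric (rate-of-strain) and antisymmetric (rate-of-rotation) parts, plus the out-of-plane mean vorticity. The grid and velocity field are synthetic stand-ins, not SPIV data, and out-of-plane derivatives (which need further assumptions in practice) are omitted.

        # Split measured in-plane velocity gradients into strain-rate and rotation-rate
        # parts and compute the out-of-plane mean vorticity component.
        import numpy as np

        x = np.linspace(0.0, 0.1, 64)            # measurement-plane grid in metres (synthetic)
        y = np.linspace(0.0, 0.1, 64)
        X, Y = np.meshgrid(x, y, indexing="ij")

        # Synthetic three-component velocity field on the plane (m/s).
        u = 0.5 * Y
        v = -0.2 * X
        w = 0.1 * X + 0.3 * Y

        du_dx, du_dy = np.gradient(u, x, y, edge_order=2)
        dv_dx, dv_dy = np.gradient(v, x, y, edge_order=2)
        dw_dx, dw_dy = np.gradient(w, x, y, edge_order=2)   # in-plane derivatives of w

        S_xy = 0.5 * (du_dy + dv_dx)             # in-plane rate-of-strain component
        Omega_xy = 0.5 * (du_dy - dv_dx)         # in-plane rate-of-rotation component
        omega_z = dv_dx - du_dy                  # out-of-plane mean vorticity
        print("mean S_xy =", S_xy.mean(), " mean omega_z =", omega_z.mean())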

  5. Binocular contrast discrimination needs monocular multiplicative noise

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M.

    2016-01-01

    The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination both at low contrasts (discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370

  6. Matching and correlation computations in stereoscopic depth perception.

    Science.gov (United States)

    Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro

    2011-03-02

    A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For small disparity (0.03°), subjects performed at chance level to a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.

  7. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    Science.gov (United States)

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed in 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.

  8. Stereoscopic display in a slot machine

    Science.gov (United States)

    Laakso, M.

    2012-03-01

    This paper reports the results of a user trial with a slot machine equipped with a stereoscopic display. The main research question was to find out what kind of added value stereoscopic 3D (S-3D) brings to slot games. After a thorough literature survey, a novel gaming platform was designed and implemented. The existing multi-game slot machine "Nova" was converted to "3DNova" by replacing the monitor with an S-3D display and converting six original games to S-3D format. To evaluate the system, several 3DNova machines were made available to players for four months. Both qualitative and quantitative analyses were carried out from statistical values, questionnaires and observations. According to the results, people find the S-3D concept interesting, but the technology is not optimal yet. Young adults and adults were fascinated by the system; older people were more cautious. In particular, the need to wear stereoscopic glasses provides a challenge; the ultimate system would probably use autostereoscopic technology. Also, the games should be designed to utilize the full power of stereoscopy. The main contributions of this paper are lessons learned from creating an S-3D slot machine platform and novel information about human factors related to stereoscopic slot machine gaming.

  9. Matte painting in stereoscopic synthetic imagery

    Science.gov (United States)

    Eisenmann, Jonathan; Parent, Rick

    2010-02-01

    While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy have not yet been well-established. To that aim, we present experiments and results of subject testing in a stereoscopic environment, similar to that of a theater (i.e. large flat screen without head-tracking). In particular we wish to empirically identify thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still maintaining the illusion of seamless perspective and depth for a particular scene and camera shot. In monoscopic synthetic imagery, any type of matte painting that maintains proper perspective lines, depth cues, and coherent lighting and textures saves in production costs while still maintaining the illusion of an alternate cinematic reality. However, in stereoscopic synthetic imagery, a 2D matte painting that worked in monoscopy may fail to provide the intended illusion of depth because the viewer has added depth information provided by stereopsis. We intend to observe two stereoscopic perceptual thresholds in this study which will provide practical guidelines indicating when to use each of three types of matte paintings. We ran subject tests in two virtual testing environments, each with varying conditions. Data were collected showing how the choices of the users matched the correct response, and the resulting perceptual threshold patterns are discussed below.

  10. Visual discomfort in stereoscopic displays : a review

    NARCIS (Netherlands)

    Lambooij, M.T.M.; IJsselsteijn, W.A.; Heynderickx, I.E.J.; Woods, A.J.; Merritt, J.O.; Bolas, M.T.; McDowall, I.E.

    2007-01-01

    Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays, but remains an ambiguous concept used to denote a variety of subjective symptoms potentially related to different underlying processes. In this paper we clarify the importance

  11. Visual discomfort in stereoscopic displays : A review

    NARCIS (Netherlands)

    Lambooij, M.T.M.; IJsselsteijn, W.; Heynderickx, I.

    2007-01-01

    Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays, but remains an ambiguous concept used to denote a variety of subjective symptoms potentially related to different underlying processes. In this paper we clarify the importance

  12. Teaching with Stereoscopic Video: Opportunities and Challenges

    Science.gov (United States)

    Variano, Evan

    2017-11-01

    I will present my work on creating stereoscopic videos for fluid pedagogy. I discuss a variety of workflows for content creation and a variety of platforms for content delivery. I review the qualitative lessons learned when teaching with this material, and discuss outlook for the future. This work was partially supported by the NSF award ENG-1604026 and the UC Berkeley Student Technology Fund.

  13. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  14. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).

  15. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses.

    Science.gov (United States)

    McKibbin, Martin; Farragher, Tracey M; Shickle, Darren

    2018-01-01

    To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. For the 65 033 UK Biobank participants, aged 40-69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population.

  16. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses

    Science.gov (United States)

    Farragher, Tracey M; Shickle, Darren

    2018-01-01

    Objective To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Methods and analysis Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. Results For the 65 033 UK Biobank participants, aged 40–69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. Conclusions The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population. PMID:29657974

  17. Monocular channels have a functional role in endogenous orienting.

    Science.gov (United States)

    Saban, William; Sekely, Liora; Klein, Raymond M; Gabay, Shai

    2018-03-01

    The literature has long emphasized the role of higher cortical structures in endogenous orienting. Based on evolutionary explanation and previous data, we explored the possibility that lower monocular channels may also have a functional role in endogenous orienting of attention. Sensitive behavioral manipulation was used to probe the contribution of monocularly segregated regions in a simple cue - target detection task. A central spatially informative cue, and its ensuing target, were presented to the same or different eyes at varying cue-target intervals. Results indicated that the onset of endogenous orienting was apparent earlier when the cue and target were presented to the same eye. The data provides converging evidence for the notion that endogenous facilitation is modulated by monocular portions of the visual stream. This, in turn, suggests that higher cortical mechanisms are not exclusively responsible for endogenous orienting, and that a dynamic interaction between higher and lower neural levels, might be involved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  19. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    Science.gov (United States)

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.

  20. A stereoscopic television system for reactor inspection

    International Nuclear Information System (INIS)

    Friend, D.B.; Jones, A.

    1980-03-01

    A stereoscopic television system suitable for reactor inspection has been developed. Right and left eye views, obtained from two conventional black and white cameras, are displayed by the anaglyph technique and observers wear appropriately coloured viewing spectacles. All camera functions, such as zoom, focus and toe-in are remotely controlled. A laboratory experiment is described which demonstrates the increase in spatial awareness afforded by the use of stereo television and illustrates its potential in the supervision of remote handling tasks. Typical depth resolutions of 3mm at 1m and 10mm at 2m have been achieved with the reactor instrument. Trials undertaken during routine inspection at Oldbury Power Station in June 1978 are described. They demonstrate that stereoscopic television can indeed improve the convenience of remote handling and that the added display realism is beneficial in visual inspection. (author)
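
    The anaglyph display step mentioned above, combining monochrome left and right camera frames into a single image viewed through coloured spectacles, is straightforward to prototype. The sketch below (Python with OpenCV) follows the common red-left/cyan-right convention; the file names are assumptions and the colour assignment in the actual instrument may differ.

        # Compose a red/cyan anaglyph from two monochrome camera frames.
        import cv2

        left = cv2.imread("camera_left.png", cv2.IMREAD_GRAYSCALE)     # assumed inputs
        right = cv2.imread("camera_right.png", cv2.IMREAD_GRAYSCALE)

        # Left view drives the red channel, right view drives green and blue (cyan),
        # so red/cyan spectacles route each view to the intended eye.
        anaglyph = cv2.merge([right, right, left])   # OpenCV channel order is B, G, R
        cv2.imwrite("anaglyph.png", anaglyph)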

  1. Use of camera drive in stereoscopic display of learning contents of introductory physics

    Science.gov (United States)

    Matsuura, Shu

    2011-03-01

    Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras used to view the 3D world can be made controllable by the user. This enabled the system and the motions of objects to be observed from any position in the 3D world. Second, cameras were made attachable to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed by using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.

  2. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    Science.gov (United States)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  3. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and minimal task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  4. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are considered a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates are used to address a known limitation of DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks. The same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided with real data, and the results show improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion of how a real-time implementation could take advantage of this approach is provided.
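
    The abstract does not spell out the parallax-to-depth computation. As a rough illustration of the idea, the sketch below applies the idealized rectified-stereo relation depth = focal length × baseline / disparity to a feature matched (for instance by SURF) between the robot's camera and the human-worn camera; the real system works with a pseudo-calibrated rig and full epipolar geometry, so all names and numbers here are hypothetical.

      import numpy as np

      def depth_from_parallax(x_left, x_right, focal_px, baseline_m):
          """Rough depth estimate from the horizontal parallax of a matched feature
          seen by two roughly parallel cameras.

          x_left, x_right : horizontal pixel coordinates of the same feature in the
                            two images (already matched, e.g. by SURF descriptors).
          focal_px        : focal length in pixels (assumed equal for both cameras).
          baseline_m      : distance between the two camera centres in metres.
          """
          disparity = float(x_left - x_right)          # parallax in pixels
          if disparity <= 0:
              return np.inf                            # feature at (or beyond) infinity
          return focal_px * baseline_m / disparity     # depth in metres

      # Example: 12 px of parallax with a 700 px focal length and a 0.5 m baseline (~29 m).
      z = depth_from_parallax(412.0, 400.0, focal_px=700.0, baseline_m=0.5)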

  5. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

    A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing a monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guarantee the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features as the light pen targets, and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the system camera. The proposed method requires no extra equipment other than the system camera for the calibration, so it is easy to implement and flexible to use. It has been incorporated into a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method. (paper)

  6. A Case of Complete Recovery of Fluctuating Monocular Blindness Following Endovascular Treatment in Internal Carotid Artery Dissection.

    Science.gov (United States)

    Kim, Ki-Tae; Baik, Seung Guk; Park, Kyung-Pil; Park, Min-Gyu

    2015-09-01

    Monocular blindness may appear as the first symptom of internal carotid artery dissection (ICAD). However, there have been no reports of monocular visual loss that repeatedly occurs and disappears in response to postural change in ICAD. A 33-year-old woman presented with transient monocular blindness (TMB) following acute-onset headache. TMB repeatedly occurred in response to postural change. Two days later, she experienced transient dysarthria and right hemiparesis in the upright position. Pupil size and light reflex were normal, but a relative afferent pupillary defect was present in the left eye. Diffusion-weighted imaging showed no acute lesion, but perfusion-weighted imaging showed perfusion delay in the left ICA territory. Digital subtraction angiography demonstrated a false lumen and an intraluminal filling defect in the proximal segment of the left ICA. Carotid stenting was performed urgently. After carotid stenting, the left relative afferent pupillary defect disappeared and TMB was no longer provoked by upright posture. At discharge, left visual acuity was completely normalized. Because fluctuating visual symptoms in ICAD may be associated with a hemodynamically unstable status, assessment of the perfusion status should be done quickly. Carotid stenting may help to improve fluctuating visual symptoms and a hemodynamically unstable status in selected patients with ICAD. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  7. Use of the stereoscopic virtual reality display system for the detection and characterization of intracranial aneurysms: A comparison with conventional computed tomography workstation and 3D rotational angiography.

    Science.gov (United States)

    Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun

    2018-07-01

    This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and the stereoscopic virtual reality display system. The 3DRA results were considered the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed one case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, the stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Human factors involved in perception and action in a natural stereoscopic world: an up-to-date review with guidelines for stereoscopic displays and stereoscopic virtual reality (VR)

    Science.gov (United States)

    Perez-Bayas, Luis

    2001-06-01

    In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. Nevertheless, in reality many other factors are involved before the 'final' conscious and subconscious stereoscopic percept, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision, as opposed to attention to peripheral vision, and the brain's 'top-down' information in relation to psychological factors like memory of previous experiences and present emotions. The brain's internal mapping of a purely perceptual world might be different from the internal mapping of a visual-motor space, which represents an 'action-directed perceptual world.' In addition, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, as well as in a world using peripheral vision (like VR, which uses a curved perspective representation and displays, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. This paper presents the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.

  9. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams at standard and HDTV resolution, which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file format that does not depend on a television signal such as PAL or NTSC. Twenty-five 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  10. Digital stereoscopic photography using StereoData Maker

    Science.gov (United States)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.

  11. Application of a stereoscopic digital subtraction angiography approach to blood flow analysis

    International Nuclear Information System (INIS)

    Fencil, L.E.; Doi, K.; Hoffmann, K.R.

    1986-01-01

    The authors are developing a stereoscopic digital subtraction angiographic (DSA) approach for accurate measurement of the size, magnification factor, orientation, and blood flow of a selected vessel segment. We employ a Siemens Digitron 2 and a Stereolix x-ray tube with a 25-mm tube shift. Absolute vessel sizes in each stereoscopic image are determined using the magnification factor and an iterative deconvolution technique employing the LSF of the DSA system. From data on vessel diameter and three-dimensional orientation, the effective attenuation coefficient of the diluted contrast medium can be determined, thus allowing accurate blood flow analysis in high-frame-rate DSA images. The accuracy and precision of the approach will be studied using both static and dynamic phantoms

  12. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making the results of objective experiments closer to those of subjective assessment. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of saliency are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
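
    The abstract does not give the exact pooling model; as a minimal sketch of the general idea, the fragment below weights per-region quality scores by their saliency class before pooling them into a single score. The weight values and names are placeholders, not the parameters trained in the paper.

      import numpy as np

      def weighted_quality(region_scores, region_saliency, weights=(0.6, 0.3, 0.1)):
          """Combine per-region quality scores with saliency-dependent weights.

          region_scores   : local quality scores (e.g. derived from blockiness,
                            zero-crossing and depth features), one per region.
          region_saliency : labels 0/1/2 = strong/general/weak saliency per region
                            (e.g. obtained by thresholding GBVS saliency maps).
          weights         : relative weight of each saliency class; placeholder values.
          """
          region_scores = np.asarray(region_scores, dtype=float)
          w = np.asarray(weights, dtype=float)[np.asarray(region_saliency)]
          return float(np.sum(w * region_scores) / np.sum(w))

      # Example: three regions, with the most salient one dominating the pooled score.
      q = weighted_quality([0.9, 0.6, 0.4], [0, 1, 2])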

  13. An MR-compatible stereoscopic in-room 3D display for MR-guided interventions.

    Science.gov (United States)

    Brunner, Alexander; Groebner, Jens; Umathum, Reiner; Maier, Florian; Semmler, Wolfhard; Bock, Michael

    2014-08-01

    A commercial three-dimensional (3D) monitor was modified for use inside the scanner room to provide stereoscopic real-time visualization during magnetic resonance (MR)-guided interventions, and tested in a catheter-tracking phantom experiment at 1.5 T. Brightness, uniformity, radio frequency (RF) emissions and MR image interferences were measured. Due to modifications, the center luminance of the 3D monitor was reduced by 14%, and the addition of a Faraday shield further reduced the remaining luminance by 31%. RF emissions could be effectively shielded; only a minor signal-to-noise ratio (SNR) decrease of 4.6% was observed during imaging. During the tracking experiment, the 3D orientation of the catheter and vessel structures in the phantom could be visualized stereoscopically.

  14. An HTML Tool for Production of Interactive Stereoscopic Compositions.

    Science.gov (United States)

    Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi

    2016-12-01

    The benefits of stereoscopic vision in medical applications were appreciated and have been thoroughly studied for more than a century. The use of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market of 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, complemented with a corresponding application program interface (API), can be relatively easily integrated into a system. Such setups could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems facilitating the stereoscopic effect. The tool's operation mode and the results of the subjective and objective performance tests conducted are then presented.

  15. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

    Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Also, the dominant clinical manifestation of patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis, who on investigation had multiple cerebral parenchymal cysticercal cysts, but never had any seizures. Although such a vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of literature along with this case report.

  16. [3-D reconstruction of the breast implants from isocentric stereoscopic x-ray images for the application monitoring and irradiation planning of a remote-controlled interstitial afterloading method].

    Science.gov (United States)

    Löffler, E; Sauer, O

    1988-01-01

    Individual irradiation planning and application monitoring by ISXP are presented for a remote-controlled interstitial afterloading technique using 192Ir wires, which is applied in breast-preserving radiotherapy. The reconstruction errors of the implants are discussed. The error analysis for ISXP can be extended to other stereoscopic methods; in this case the quality considerations made by other authors have to be extended. The maximum reconstruction error was investigated, for a given digitization precision, focus size, and object blur due to patient movement, as a function of the deviation angle. The optimum deviation angle is about 45 degrees, depending on the importance given to the individual parts and almost uninfluenced by the ratio between the isocenter-film and focus-isocenter distances. At the optimized deviation angle, a displacement of an implant point by 1 mm leads to a maximum reconstruction error of 2 mm. Dose specification follows the Paris system. If the circumcircle radius of the application triangle is changed by 1 mm, the dose changes by 14% in the case of very short wires and small side lengths. A verification in a phantom showed a positioning error below 0.5 mm. The dose error is 2%, owing to the mutual compensation of the direction-isotropic reconstruction errors of the needles, the number of which is between seven and nine.

  17. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Localization and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features by means of a stochastic triangulation technique. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  18. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    Science.gov (United States)

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on the densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of the deprived eyes decreased with increasing duration of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In the non-deprived eyes, by contrast, cell densities increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in the cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye.

  19. Optimal control of set-up margins and internal margins for intra and extracranial radiotherapy using stereoscopic kilo voltage imaging; Controle optimal des incertitudes de positionnement externes et internes lors d'irradiations craniennes et extracraniennes par imagerie stereoscopique de basse energie

    Energy Technology Data Exchange (ETDEWEB)

    Verellen, D.; Soete, G.; Linthout, N.; Tournel, K.; Storme, G. [Vrije Universiteit Brussel (AZ-VUB), Dept. of Radiotherapy, Oncology Center, Academic Hospital, Brussels (Belgium)

    2006-09-15

    In this paper the clinical introduction of stereoscopic kV-imaging in combination with a 6 degrees-of-freedom (6 DOF) robotics system and breathing synchronized irradiation will be discussed in view of optimally reducing inter-fractional as well as intra-fractional geometric uncertainties in conformal radiation therapy. Extracranial cases represent approximately 70% of the patient population on the NOVALIS treatment machine (BrainLAB A.G., Germany) at the AZ-VUB, which is largely due to the efficiency of the real-time positioning features of the kV-imaging system. The prostate case will be used as an example of those target volumes showing considerable changes in position from day-to-day, yet with negligible motion during the actual course of the treatment. As such it will be used to illustrate the on-line target localization using kV-imaging and 6 DOF patient adjustment with and without implanted radio-opaque markers prior to treatment. Small lung lesion will be used to illustrate the system's potential to synchronize the irradiation with breathing in coping with intra-fractional organ motion. (authors)

  20. Measurements of turbulent premixed flame dynamics using cinema stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Adam M.; Driscoll, James F. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States); Ceccio, Steven L. [University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI (United States)

    2008-06-15

    A new experimental method is described that provides high-speed movies of turbulent premixed flame wrinkling dynamics and the associated vorticity fields. This method employs cinema stereoscopic particle image velocimetry and has been applied to a turbulent slot Bunsen flame. Three-component velocity fields were measured with high temporal and spatial resolutions of 0.9 ms and 140 μm, respectively. The flame-front location was determined using a new multi-step method based on particle image gradients, which is described. Comparisons are made between flame fronts found with this method and simultaneous CH-PLIF images. These show that the flame contour determined corresponds well to the true location of maximum gas density gradient. Time histories of typical eddy-flame interactions are reported and several important phenomena identified. Outwardly rotating eddy pairs wrinkle the flame and are attenuated as they pass through the flamelet. Significant flame-generated vorticity is produced downstream of the wrinkled tip. Similar wrinkles are caused by larger groups of outwardly rotating eddies. Inwardly rotating pairs cause significant convex wrinkles that grow as the flame propagates. These wrinkles encounter other eddies that alter their behavior. The effects of the hydrodynamic and diffusive instabilities are observed and found to be significant contributors to the formation and propagation of wrinkles. (orig.)

  1. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    Science.gov (United States)

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
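
    As a minimal sketch of the global stage of such a pipeline, the function below linearly remaps a depth map into a target comfort range around an adjustable zero-disparity plane. It omits the paper's two-stage global/local optimization, and the parameter names are assumptions rather than the authors' notation.

      import numpy as np

      def remap_depth(depth, comfort_near, comfort_far, zero_plane=None):
          """Globally remap a depth map into a display comfort zone.

          depth        : input depth map (metres or arbitrary units).
          comfort_near : nearest depth judged comfortable on the target display.
          comfort_far  : farthest comfortable depth.
          zero_plane   : optional depth to place at the screen (zero disparity);
                         defaults to the midpoint of the comfort zone.
          """
          depth = np.asarray(depth, dtype=float)
          d_min, d_max = depth.min(), depth.max()
          t = (depth - d_min) / max(d_max - d_min, 1e-9)        # normalise to [0, 1]
          remapped = comfort_near + t * (comfort_far - comfort_near)
          if zero_plane is not None:
              # Shift the whole range so the requested depth sits on the screen plane.
              remapped += zero_plane - 0.5 * (comfort_near + comfort_far)
          return remapped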

  2. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher quality reconstructions than the traditional method. (special topic)
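
    The standard Gerchberg–Saxton loop that such methods build on can be sketched compactly. The version below assumes a simple Fourier-transform hologram of a single perspective view and is only a generic illustration, not the authors' implementation.

      import numpy as np

      def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
          """Compute a phase-only hologram whose far-field reconstruction
          approximates the target amplitude (one perspective view of the object).

          Standard GS loop: constrain the amplitude in the image plane to the
          target, and the amplitude in the hologram plane to 1 (phase-only),
          propagating between the planes with FFTs.
          """
          rng = np.random.default_rng(seed)
          target = np.asarray(target_amplitude, dtype=float)
          # Start from the target amplitude with a random phase.
          field = target * np.exp(1j * 2 * np.pi * rng.random(target.shape))
          for _ in range(iterations):
              holo = np.fft.ifft2(np.fft.ifftshift(field))
              holo = np.exp(1j * np.angle(holo))                # phase-only constraint
              field = np.fft.fftshift(np.fft.fft2(holo))
              field = target * np.exp(1j * np.angle(field))     # amplitude constraint
          return np.angle(holo)                                 # phase pattern to display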

  3. Case study: the introduction of stereoscopic games on the Sony PlayStation 3

    Science.gov (United States)

    Bickerstaff, Ian

    2012-03-01

    A free stereoscopic firmware update on Sony Computer Entertainment's PlayStation® 3 console provides the potential to increase enormously the popularity of stereoscopic 3D in the home. For this to succeed, though, a large selection of content has to become available that exploits 3D in the best way possible. In addition to the challenges already found in creating 3D movies and television programmes, the stereography must compensate for the dynamic and unpredictable environments found in games. The software must automatically map the depth range of the scene into the display's comfort zone while minimising depth compression. This paper presents a range of techniques developed to solve this problem and the challenge of creating twice as many images as the 2D version without excessively compromising the frame rate or image quality. At the time of writing, over 80 stereoscopic PlayStation 3 games have been released, and notable titles are used as examples to illustrate how the techniques have been adapted for different game genres. Since the firmware's introduction in 2010, the industry has matured, with a large number of developers now producing increasingly sophisticated 3D content. New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.
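
    One common way to map a scene's depth range into a screen-disparity budget, which may or may not match the console's actual implementation, is to solve for the separation and convergence distance of an off-axis stereo camera rig. A minimal sketch, with all parameter names assumed, follows.

      def stereo_rig_for_budget(z_near, z_far, focal_px, budget_near_px, budget_far_px):
          """Choose camera separation and convergence distance so that the scene's
          depth range [z_near, z_far] fills a given screen-disparity budget.

          With an off-axis (asymmetric frustum) rig, a point at depth z produces a
          pixel disparity d(z) = focal_px * a * (1/z - 1/c), where a is the camera
          separation and c the convergence (zero-parallax) distance.  Requiring
          d(z_near) = +budget_near_px and d(z_far) = -budget_far_px gives:
          """
          span = budget_near_px + budget_far_px
          a = span / (focal_px * (1.0 / z_near - 1.0 / z_far))      # camera separation
          inv_c = 1.0 / z_near - budget_near_px / (focal_px * a)    # 1 / convergence distance
          return a, 1.0 / inv_c

      # Example: scene spanning 2-200 units, 1000 px focal length,
      # allowing 20 px in front of the screen and 30 px behind it.
      separation, convergence = stereo_rig_for_budget(2.0, 200.0, 1000.0, 20.0, 30.0)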

  4. Stereoscopic 3D video games and their effects on engagement

    Science.gov (United States)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few studies have attempted to quantify the player experience to determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  5. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    Science.gov (United States)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for obtaining 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. They are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures and thus of the processes in the subsurface should be increased. Stereoscopic techniques are implemented, for example, in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • fast rendering of large amounts of data so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather, they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  6. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers under regular daylight conditions.
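
    The paper's equation system, which also uses the carrier's speed and orientation, is not reproduced in the abstract. As a simpler illustration of monocular time-to-collision reasoning, the sketch below estimates time to collision from the growth rate of an object's apparent size; it is a generic cue, not the authors' method, and the names and numbers are hypothetical.

      def time_to_collision(size_prev_px, size_curr_px, dt):
          """Estimate time to collision of an approaching object from the growth of
          its apparent (pixel) size between two frames.

          For an object closing at roughly constant speed, TTC ~= s / (ds/dt),
          where s is the angular (or pixel) size of the object.
          """
          growth = (size_curr_px - size_prev_px) / dt
          if growth <= 0:
              return float("inf")                       # not approaching
          return size_curr_px / growth                  # seconds until contact

      # Example: a detected contour growing from 40 px to 42 px over 0.1 s -> TTC ~= 2.1 s.
      ttc = time_to_collision(40.0, 42.0, 0.1)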

  7. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles (rovers) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  8. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation, such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.

  9. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

    Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs, as shown in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, rough and fine positioning algorithms are designed to detect the drogue. A particle filter based on the drogue's shape is proposed to track the drogue. A strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely in the detection and tracking process, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.

  10. Real-time photorealistic stereoscopic rendering of fire

    Science.gov (United States)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static, fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that our method based on billboarding is effective for attaining real-time frame rates. Slicing is used to simulate depth. 2D images are texture-mapped onto polygons, and alpha blending is used to handle transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.

  11. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a technique that obtains motion information from image acquisition equipment and reconstructs the structure of an unknown environment. We provide an analysis of bio-inspired flight in insects, employing a novel technique based on SLAM, and combine visual and inertial measurements to obtain high accuracy and robustness. We present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach gives a more accurate quantitative simulation of insect navigation and can reach positioning accuracy at the centimeter level.

  12. The right view from the wrong location: depth perception in stereoscopic multi-user virtual environments.

    Science.gov (United States)

    Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot

    2012-04-01

    Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
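
    The ray-intersection model mentioned in the abstract can be sketched in two dimensions (a top-down view): the predicted perceived location of a virtual point is where the displaced viewer's two lines of sight through the rendered on-screen points cross. The coordinates and names below are assumptions chosen for illustration; the sketch reproduces the qualitative prediction that moving toward the screen compresses perceived depth.

      import numpy as np

      def _cross2(u, v):
          """z-component of the 2D cross product."""
          return u[0] * v[1] - u[1] * v[0]

      def _screen_point(eye, p):
          """Intersection with the screen plane z = 0 of the line from eye to p ((x, z) coords)."""
          eye, p = np.asarray(eye, float), np.asarray(p, float)
          t = eye[1] / (eye[1] - p[1])
          return eye + t * (p - eye)

      def _intersect(a0, a1, b0, b1):
          """Intersection of the 2D lines a0->a1 and b0->b1 (assumed non-parallel)."""
          a0, a1, b0, b1 = (np.asarray(v, float) for v in (a0, a1, b0, b1))
          da, db = a1 - a0, b1 - b0
          t = _cross2(b0 - a0, db) / _cross2(da, db)
          return a0 + t * da

      def perceived_point(p, render_eyes, viewer_eyes):
          """Where a displaced viewer perceives virtual point p rendered for the centre of
          projection (screen at z = 0, viewers at z > 0, virtual content at z < 0)."""
          s_left = _screen_point(render_eyes[0], p)
          s_right = _screen_point(render_eyes[1], p)
          return _intersect(viewer_eyes[0], s_left, viewer_eyes[1], s_right)

      # Leader eyes 2 m from the screen, 6.4 cm apart; follower displaced 0.5 m forward.
      render = ([-0.032, 2.0], [0.032, 2.0])
      viewer = ([-0.032, 1.5], [0.032, 1.5])
      print(perceived_point([0.0, -1.0], render, viewer))   # ~[0, -0.75]: depth compressed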

  13. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography. The use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges such as flight altitude maintenance. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm from the velocity of the UAV. Image detection is used to obtain terrain images and to measure the relative altitude from the ground to the UAV. The UAV flight system can be set to fly at a fixed and relatively low altitude to obtain ground images at the same resolution. A forward-looking camera is mounted on the upper side of the aircraft’s nose. In combination with the skyline detection algorithm, this helps the aircraft to maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution and to detect the relative altitude along the flight path.

  14. Psychometric Assessment of Stereoscopic Head-Mounted Displays

    Science.gov (United States)

    2016-06-29

    Journal article, Jan 2015 - Dec 2015. Psychometric assessment of stereoscopic head-mounted displays ... to render an immersive three-dimensional constructive environment. The purpose of this effort was to quantify the impact of aircrew vision on ... simulated tasks requiring precise depth discrimination. This work will provide an example validation method for future stereoscopic virtual immersive ...

  15. Interactive floating windows: a new technique for stereoscopic video games

    Science.gov (United States)

    Zerebecki, Chris; Stanfield, Brodie; Tawadrous, Mina; Buckstein, Daniel; Hogue, Andrew; Kapralos, Bill

    2012-03-01

    The film industry has a long history of creating compelling experiences in stereoscopic 3D. Recently, the video game as an artistic medium has matured into an effective way to tell engaging and immersive stories. Given the current push to bring stereoscopic 3D technology into the consumer market, there is considerable interest in developing stereoscopic 3D video games. Game developers have largely ignored the need to design their games specifically for stereoscopic 3D and have thus relied on automatic conversion and driver technology. Game developers need to evaluate solutions used in other media, such as film, to correct perceptual problems such as window violations, and to modify or create new solutions that work within an interactive framework. In this paper we extend the dynamic floating window technique into the interactive domain, enabling the player to position a virtual window in space. By interactively changing the position, size, and 3D rotation of the virtual window, objects can be made to 'break the mask', dramatically enhancing the stereoscopic effect. By demonstrating that solutions from the film industry can be extended into the interactive space, we hope to initiate further discussion in the game development community to strengthen story-telling mechanisms in stereoscopic 3D games.

  16. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness, by challenging chickens to modify their monocular sleep. We sleep deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  17. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    Science.gov (United States)

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  18. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    Science.gov (United States)

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  19. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide an initial depth map and scale correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlative semi-dense depth maps, using highly accurate pose estimation achieved by direct grid-point-based alignment. The estimated pose is coupled with depth map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. Finally, several indoor experiments are conducted. The results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach, and the approach can also run in real time on a commonly used computer.
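
    A chessboard of known square size fixes the metric scale because a single plane-based pose estimate already returns the camera translation in metres. The OpenCV sketch below illustrates that step under assumed pattern and square sizes; the paper's own initialization and scale-drift correction are more involved, and the function name is illustrative.

      import numpy as np
      import cv2

      def metric_pose_from_chessboard(gray, camera_matrix, dist_coeffs=None,
                                      pattern=(9, 6), square_m=0.025):
          """Recover a metric camera pose from a chessboard of known square size.

          The board fixes the absolute scale: the returned translation is expressed
          in metres, so depths initialized relative to this pose are metric too.
          camera_matrix and dist_coeffs come from a prior intrinsic calibration.
          """
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if not found:
              return None
          # Chessboard corner coordinates in the board frame (z = 0 plane), in metres.
          objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
          objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_m
          ok, rvec, tvec = cv2.solvePnP(objp, corners, camera_matrix, dist_coeffs)
          return (rvec, tvec) if ok else None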

  20. Anisometropia and ptosis in patients with monocular elevation deficiency

    International Nuclear Information System (INIS)

    Zafar, S.N.; Islam, F.; Khan, A.M.

    2016-01-01

    Objective: To determine the effect of ptosis on the refractive error in eyes having monocular elevation deficiency. Place and Duration of Study: Al-Shifa Trust Eye Hospital, Rawalpindi, from January 2011 to January 2014. Methodology: Visual acuity, refraction, orthoptic assessment and ptosis evaluation of all patients having monocular elevation deficiency (MED) were recorded. The Shapiro-Wilk test was used for tests of normality. Median and interquartile range (IQR) were calculated for the data. Non-parametric variables were compared using the Wilcoxon signed ranks test. P-values of <0.05 were considered significant. Results: A total of 41 MED patients were assessed during the study period. Best corrected visual acuity (BCVA) and refractive error were compared between the eyes having MED and the unaffected eyes of the same patients. The refractive status of patients having ptosis with MED was also compared with that of patients having MED without ptosis. Astigmatic correction and vision showed a significant difference between the two eyes of the patients. Vision was significantly different between the two eyes of patients in both groups, i.e. with either presence or absence of ptosis (p=0.04 and p < 0.001, respectively). Conclusion: A significant difference in vision and anisoastigmatism was noted between the two eyes of patients with MED in this study. The presence or absence of ptosis affected the vision but did not have a significant effect on the spherical equivalent (SE) and astigmatic correction between the two eyes. (author)

  1. What is stereoscopic vision good for?

    Science.gov (United States)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  2. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention (SVA)-based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies from human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of the SVA. Both objective and subjective evaluations of the extracted ROIs indicated that the proposed SVA-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of the ROIs is improved by  dB at the cost of insensitive image quality degradation of the background image.

  3. Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision

    Science.gov (United States)

    Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.

    2003-08-01

    Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the pictures. The device provides a switching frequency of more than 100 Hz, so flickering is absent. Thus, images are demonstrated separately to the left eye and the right eye in turn without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the visual fields. Coordinating the LC-cell transfer characteristic with the timing parameters of the monitor screen has made it possible to improve stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts', noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia and other binocular and stereoscopic vision impairments; for cultivating, training and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, stereovision acuity and more; and for fixing the borders of central scotoma, as well as suppression scotoma in strabismus.

  4. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. Cameras are mounted in correspondence of the user's eyes and allow one to grab live images of the patient with the same point of view of the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  5. Using mental rotation to evaluate the benefits of stereoscopic displays

    Science.gov (United States)

    Aitsiselmi, Y.; Holliman, N. S.

    2009-02-01

    Context: The idea behind stereoscopic displays is to create the illusion of depth, and this concept could have many practical applications. A common spatial ability test involves mental rotation, so a mental rotation task should be easier if undertaken on a stereoscopic screen. Aim: The aim of this project was to evaluate stereoscopic displays (3D screens) and to assess whether they are better for performing a certain task than a 2D display. A secondary aim was to perform a similar study replicating the conditions of using a stereoscopic mobile phone screen. Method: We devised a spatial ability test involving a mental rotation task that participants were asked to complete on either a 3D or a 2D screen. We also designed a similar task to simulate the experience on a stereoscopic cell phone. The participants' error rates and response times were recorded. Using statistical analysis, we then compared the error rates and response times of the groups to see if there were any significant differences. Results: We found that participants achieved better scores when doing the task on a stereoscopic screen as opposed to a 2D screen. However, there was no statistically significant difference in the time it took them to complete the task. We found similar results for the 3D cell phone display condition. Conclusions: The results show that the extra depth information given by a stereoscopic display makes it easier to mentally rotate a shape, as depth cues are readily available. These results could have useful implications for certain industries.

  6. Passive method of eliminating accommodation/convergence disparity in stereoscopic head-mounted displays

    Science.gov (United States)

    Eichenlaub, Jesse B.

    2005-03-01

    The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head-mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene, from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high-speed microdisplay to create cones of light that converge at different distances to form the voxels of a high-resolution, space-filling image. A bench model display was built and a series of visual tests was performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.

  7. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    The study of virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems come in two types of presentation method: displays using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a display area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images: a conventional display can show only one screen and cannot be enlarged, for example to twice its size. To enlarge the display area, the authors have developed a method using a mirror. This extension enables observers to see a virtual image plane and doubles the screen area. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and doubles the screen area, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  8. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex ... of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  9. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (position sensitive detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. A micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments was performed to evaluate the performance of our prototype system. The experimental results show that the proposed system is compact and low-cost, is easy to install, and offers frame rates high enough for high-speed motion tracking in games.
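
    The abstract only states that range is recovered from the measured IR intensity together with the 2D spot position on the PSD. One plausible reading is an inverse-square intensity model combined with a pinhole bearing, sketched below; the function marker_position and the constants k and f are assumptions for illustration, not the paper's calibration procedure.

```python
import numpy as np

# Hypothetical sketch: recovering a marker's 3D position from a single PSD reading.
# Assumes (not stated in the abstract) that received IR intensity follows an
# inverse-square law, I = k / r^2, with k fixed by a one-time calibration, and
# that the wide-angle lens behaves as an ideal pinhole with focal length f.
def marker_position(x_psd, y_psd, intensity, k=1.0, f=0.004):
    """Estimate the marker's 3D position (metres) in the camera frame."""
    r = np.sqrt(k / intensity)               # range from inverse-square falloff
    direction = np.array([x_psd, y_psd, f])  # ray through the lens centre
    direction /= np.linalg.norm(direction)   # unit bearing vector
    return r * direction                     # 3D point along that bearing

# Example: a marker imaged 1 mm right of centre with intensity 0.25
print(marker_position(0.001, 0.0, 0.25))
```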

  10. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For real-time, precise localization on urban streets, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived and, together with the state transition equation, forms the Kalman filter. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's two-step EKF method, the algorithm is more accurate and meets the needs of real-time, accurate localization in cities.
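
    For readers unfamiliar with the filtering side, the following is a minimal Extended Kalman filter skeleton of the general kind described, with a toy constant-velocity motion model and a placeholder observation function standing in for the trifocal-tensor measurement. None of it reproduces the authors' actual state vector or equations.

```python
import numpy as np

# Minimal EKF skeleton with numerical Jacobians; the motion and measurement
# models below are illustrative placeholders, not the paper's formulation.
def numerical_jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

class EKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f):
        F = numerical_jacobian(f, self.x)
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, h):
        H = numerical_jacobian(h, self.x)
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - h(self.x))
        self.P = (np.eye(self.x.size) - K @ H) @ self.P

# Toy usage: constant-velocity planar motion; in the paper the "measurement"
# would instead come from the trifocal-tensor constraint on tracked ground features.
f = lambda x: np.array([x[0] + x[2], x[1] + x[3], x[2], x[3]])   # [px, py, vx, vy]
h = lambda x: x[:2]                                              # placeholder observation
ekf = EKF(np.zeros(4), np.eye(4), 0.01 * np.eye(4), 0.1 * np.eye(2))
ekf.predict(f)
ekf.update(np.array([0.1, 0.0]), h)
print(ekf.x)
```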

  11. Monocular oral reading after treatment of dense congenital unilateral cataract

    Science.gov (United States)

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  12. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  13. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    Science.gov (United States)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

    In this paper, the authors conducted an experiment to evaluate the UX in an actual outdoor environment, assuming casual use of a monocular HMD to view video content while walking. Eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ and NASA-TLX were used to assess the subjective workloads and symptoms. The objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the UX in terms of workload. Future experiments in other locations will involve higher cognitive loads in order to study performance and situation awareness with respect to the real and media environments.

  14. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    Science.gov (United States)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

    A monocular vision-based pose measurement system is provided for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. Firstly, a circular planar cooperative target is designed, and an image of the target fixed on the test-bed is acquired. Blob analysis-based image processing is used to detect the object circles on the target. A fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of the object circles. Finally, pose measurements are obtained by combining the extracted centers with the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirements of pose measurement.
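
    The abstract does not publish the FCCSP algorithm itself; the sketch below only illustrates the underlying pixel-statistics idea, taking the centroid of each connected blob in a binarised target image as a circle centre. Function names and thresholds are assumptions, and SciPy's connected-component labelling stands in for whatever labelling the authors use.

```python
import numpy as np
from scipy import ndimage

# Illustrative blob-centroid extraction: each sufficiently large connected
# component of the binary image yields one circle centre (mean pixel position).
def circle_centers(binary_img, min_area=20):
    """Return (row, col) centroids of blobs larger than min_area pixels."""
    labels, n = ndimage.label(binary_img)
    centers = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size >= min_area:
            centers.append((ys.mean(), xs.mean()))
    return centers

# Example with a synthetic image containing one filled circle at (40, 60)
img = np.zeros((100, 100), dtype=bool)
yy, xx = np.ogrid[:100, :100]
img[(yy - 40) ** 2 + (xx - 60) ** 2 <= 15 ** 2] = True
print(circle_centers(img))   # approximately [(40.0, 60.0)]
```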

  15. A system and method for adjusting and presenting stereoscopic content

    DEFF Research Database (Denmark)

    2013-01-01

    on the basis of one or more vision-specific parameters (θM, θmax, θmin, Δθ) indicating abnormal vision for the user. In this way, presenting stereoscopic content is enabled that is adjusted specifically to the given person. This may e.g. be used for training purposes or for improved...

  16. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    International Nuclear Information System (INIS)

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-01-01

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  17. Size Optimization of 3D Stereoscopic Film Frames

    African Journals Online (AJOL)

    pc

    2018-03-22

    Keywords: Optimization; Stereoscopic Film; 3D Frames; Aspect Ratio.

  18. SLAMM: Visual monocular SLAM with continuous mapping using multiple maps.

    Directory of Open Access Journals (Sweden)

    Hayyan Afeef Daoud

    Full Text Available This paper presents the concept of Simultaneous Localization and Multi-Mapping (SLAMM). It is a system that ensures continuous mapping and information preservation despite failures in tracking due to corrupted frames or sensor malfunction, making it suitable for real-world applications. It works with single or multiple robots. In a single-robot scenario the algorithm generates a new map at the time of tracking failure and later merges maps at the event of loop closure. Similarly, maps generated by multiple robots are merged without prior knowledge of their relative poses, which makes the algorithm flexible. The system works in real time at frame-rate speed. The proposed approach was tested on the KITTI and TUM RGB-D public datasets and showed superior results compared to the state of the art in calibrated visual monocular keyframe-based SLAM. The mean tracking time is around 22 milliseconds. The initialization is twice as fast as in ORB-SLAM, and the retrieved map can preserve up to 90 percent more information, depending on tracking-loss and loop-closure events. For the benefit of the community, the source code, along with a framework to be run with the Bebop drone, is made available at https://github.com/hdaoud/ORBSLAMM.
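
    The multi-map bookkeeping described above, opening a new map when tracking fails and merging maps at loop closure, can be illustrated with a toy class. This is not the SLAMM implementation (see the authors' repository); keyframes are reduced to opaque identifiers and all names are hypothetical.

```python
# Toy illustration of multi-map bookkeeping: a fresh map is opened whenever
# tracking fails, and two maps are merged once a loop closure links them.
class MultiMapper:
    def __init__(self):
        self.maps = [[]]          # list of maps, each a list of keyframe ids
        self.active = 0           # index of the map currently being extended

    def add_keyframe(self, kf_id, tracking_ok):
        if not tracking_ok:       # tracking lost: start a fresh map
            self.maps.append([])
            self.active = len(self.maps) - 1
        self.maps[self.active].append(kf_id)

    def merge_on_loop_closure(self, map_a, map_b):
        """Merge map_b into map_a once a loop closure relates their keyframes."""
        if map_a != map_b:
            self.maps[map_a].extend(self.maps[map_b])
            self.maps[map_b] = []          # keep indices stable; map_b is now empty
            self.active = map_a

mm = MultiMapper()
for i, ok in enumerate([True, True, False, True, True]):
    mm.add_keyframe(i, ok)
mm.merge_on_loop_closure(0, 1)             # loop closure found between the two maps
print(mm.maps)                              # [[0, 1, 2, 3, 4], []]
```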

  19. Chromatic and achromatic monocular deprivation produce separable changes of eye dominance in adults.

    Science.gov (United States)

    Zhou, Jiawei; Reynaud, Alexandre; Kim, Yeon Jin; Mullen, Kathy T; Hess, Robert F

    2017-11-29

    Temporarily depriving one eye of its input, in whole or in part, results in a transient shift in eye dominance in human adults, with the patched eye becoming stronger and the unpatched eye weaker. However, little is known about the role of colour contrast in these behavioural changes. Here, we first show that the changes in eye dominance and contrast sensitivity induced by monocular eye patching affect colour and achromatic contrast sensitivity equally. We next use dichoptic movies, customized and filtered to stimulate the two eyes differentially. We show that a strong imbalance in achromatic contrast between the eyes, with no colour content, also produces similar, unselective shifts in eye dominance for both colour and achromatic contrast sensitivity. Interestingly, if this achromatic imbalance is paired with similar colour contrast in both eyes, the shift in eye dominance is selective, affecting achromatic but not chromatic contrast sensitivity and revealing a dissociation in eye dominance for colour and achromatic image content. On the other hand, a strong imbalance in chromatic contrast between the eyes, with no achromatic content, produces small, unselective changes in eye dominance, but if paired with similar achromatic contrast in both eyes, no changes occur. We conclude that perceptual changes in eye dominance are strongly driven by interocular imbalances in achromatic contrast, with colour contrast having a significant counter balancing effect. In the short term, eyes can have different dominances for achromatic and chromatic contrast, suggesting separate pathways at the site of these neuroplastic changes. © 2017 The Author(s).

  20. Real-Time Vehicle Speed Estimation Based on License Plate Tracking in Monocular Video Sequences

    Directory of Open Access Journals (Sweden)

    Aleksej MAKAROV

    2016-02-01

    Full Text Available A method is presented for estimating vehicle speed from images obtained by a fixed, over-the-road monocular camera. The method is based on detecting and tracking vehicle license plates. The contrast between the license plate and its surroundings is enhanced using infrared light-emitting diodes and infrared camera filters. A range of license plate height values is assumed a priori. The camera's vertical angle of view is measured prior to installation, and the camera tilt is continuously measured by a micro-electromechanical sensor. The distance of the license plate from the camera is derived theoretically in terms of its pixel coordinates. Inaccuracies due to frame-rate drift, tilt and angle-of-view measurement errors, edge-pixel detection, and the coarse assumption of license plate height are analyzed and theoretically formulated. The resulting system is computationally efficient, inexpensive, and easy to install and maintain alongside existing ALPR cameras.
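
    Under a plain pinhole model, the geometric core of the method can be sketched as follows: the plate's distance follows from its pixel height and an assumed physical height, and speed follows from the change in distance between frames. The paper's full derivation also corrects for camera tilt and angle of view, which is omitted here, and all constants below are illustrative.

```python
# Hedged geometric sketch: pinhole camera, plate of assumed height H metres,
# focal length given in pixels. Speed comes from the change of distance over
# one frame interval; tilt and angle-of-view corrections are omitted.
def plate_distance(plate_height_px, plate_height_m=0.11, focal_px=2000.0):
    """Approximate distance (m) along the optical axis to the plate."""
    return plate_height_m * focal_px / plate_height_px

def vehicle_speed(h1_px, h2_px, frame_dt=1 / 25.0, **kw):
    """Speed (km/h) estimated from plate heights in two consecutive frames."""
    d1, d2 = plate_distance(h1_px, **kw), plate_distance(h2_px, **kw)
    return abs(d2 - d1) / frame_dt * 3.6

# Plate grows from 40 px to 44 px between frames 40 ms apart -> about 45 km/h
print(vehicle_speed(40, 44))
```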

  1. Breathing-synchronized irradiation using stereoscopic kV-imaging to limit influence of interplay between leaf motion and organ motion in 3D-CRT and IMRT: Dosimetric verification and first clinical experience

    International Nuclear Information System (INIS)

    Verellen, Dirk; Tournel, Koen; Steene, Jan van de; Linthout, Nadine; Wauters, Tom; Vinh-Hung, Vincent; Storme, Guy

    2006-01-01

    Purpose: To verify the technical feasibility of a prototype developed for breathing-synchronized irradiation by phantom measurement and report on the first clinical experience of 3 patients. Methods and Materials: Adaptations to a commercially available image-guidance technique (Novalis Body/ExacTrac4.0; BrainLAB AG, Heimstetten, Germany) were implemented, allowing breathing-synchronized irradiation with the Novalis system. A simple phantom simulating a breathing pattern of 16 cycles per minute and covering a distance of 4 cm was introduced to assess the system's performance to: (1) trigger the linac at the right moment (using a hidden target in the form of a 3-mm metal bead mounted to the phantom); (2) assess the delivered dose in nongated and gated mode (using an ionization chamber mounted to the phantom); (3) evaluate dose blurring and interplay between organ motion and leaf motion when applying dynamic multileaf collimation (DMLC) intensity-modulated radiation therapy (IMRT) techniques (using radiographic film mounted to the phantom). The effect of motion was evaluated by importing the measured fluence maps generated by the linac into the treatment planning system and recalculating the resulting dose distribution from DMLC IMRT fluence patterns acquired in nongated and gated mode. The synchronized-breathing technique was applied to three clinical cases: one liver metastasis, one lung metastasis, and one primary lung tumor. Results: No measurable delay in the triggering of the linac could be observed based on the hidden target test. The ionization chamber measurements showed that the system is able to improve the dose absorption from 44% (in nongated mode) to 98% (in gated mode) for a small-field irradiation (3 × 3 cm²) of a moving target. Importing measured fluence maps generated for a realistic patient treatment and actually delivered by the linac into the treatment planning system yielded highly disturbed dose distributions in nongated delivery, whereas the

  2. Stereoscopy in diagnostic radiology and procedure planning: does stereoscopic assessment of volume-rendered CT angiograms lead to more accurate characterisation of cerebral aneurysms compared with traditional monoscopic viewing?

    International Nuclear Information System (INIS)

    Stewart, Nikolas; Lock, Gregory; Coucher, John; Hopcraft, Anthony

    2014-01-01

    Stereoscopic vision is a critical part of the human visual system, conveying more information than two-dimensional, monoscopic observation alone. This study aimed to quantify the contribution of stereoscopy to the assessment of radiographic data, using widely available three-dimensional (3D)-capable display monitors, by assessing whether stereoscopic viewing improved the characterisation of cerebral aneurysms. Nine radiology registrars were shown 40 different volume-rendered (VR) models of cerebral computed tomography angiograms (CTAs), each in both monoscopic and stereoscopic format, and then asked to record aneurysm characteristics on short multiple-choice answer sheets. The monitor used was a current-model, commercially available 3D television. Responses were marked against a gold standard of assessments made by a consultant radiologist using the original CT planar images on a diagnostic radiology computer workstation. The participants' results were fairly homogeneous, with most showing no difference in diagnosis using stereoscopic VR models. One participant performed better on the monoscopic VR models. On average, monoscopic VR models yielded slightly better diagnoses, by 2.0%. Stereoscopy has a long history, but it has only recently become technically feasible for stored cross-sectional data to be adequately reformatted and displayed in this format. Scant literature exists to quantify the technology's possible contribution to medical imaging - this study attempts to build on this limited knowledge base and promote discussion within the field. Stereoscopic viewing of images should be further investigated and may well eventually find a permanent place in procedural and diagnostic medical imaging.

  3. Calculation of 3D Coordinates of a Point on the Basis of a Stereoscopic System

    Science.gov (United States)

    Mussabayev, R. R.; Kalimoldayev, M. N.; Amirgaliyev, Ye. N.; Tairova, A. T.; Mussabayev, T. R.

    2018-05-01

    The task of calculating the three-dimensional (3D) coordinates of a material point is considered. Two flat images (a stereopair) corresponding to the left and right viewpoints of a 3D scene are used for this purpose. The stereopair is obtained using two cameras with parallel optical axes. Analytical formulas for calculating the 3D coordinates of a material point in the scene were obtained by analysing the optical and geometrical schemes of the stereoscopic system. The algorithmic and hardware realization of the method is presented in detail, and a practical module is recommended for determining the unknown parameters of the optical system. A series of experimental investigations was conducted to verify the theoretical results. In these experiments, minor inaccuracies arose from spatial distortions in the optical system and from its discreteness. With a high-quality stereoscopic system, the remaining calculation inaccuracy is small enough for the method to be applied to a wide range of practical tasks.
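
    For a rectified rig with parallel optical axes, the standard triangulation formulas take the following form. This is the textbook derivation rather than the paper's exact notation, and the focal length and baseline values are placeholders.

```python
# Standard triangulation for a rectified stereo rig with parallel optical axes.
# f is the focal length in pixels, B the baseline in metres, and (xl, yl), xr the
# pixel coordinates of the same point in the left and right images, measured from
# each image centre (for a rectified pair, yl == yr). Values are illustrative.
def triangulate(xl, yl, xr, f=800.0, B=0.12):
    """Return (X, Y, Z) in metres, expressed in the left-camera frame."""
    disparity = xl - xr          # horizontal parallax in pixels
    Z = f * B / disparity        # depth from similar triangles
    X = xl * Z / f
    Y = yl * Z / f
    return X, Y, Z

print(triangulate(xl=120.0, yl=-40.0, xr=80.0))   # point about 2.4 m in front of the rig
```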

  4. Stereoscopic Three-Dimensional Neuroanatomy Lectures Enhance Neurosurgical Training: Prospective Comparison with Traditional Teaching.

    Science.gov (United States)

    Clark, Anna D; Guilfoyle, Mathew R; Candy, Nicholas G; Budohoski, Karol P; Hofmann, Riikka; Barone, Damiano G; Santarius, Thomas; Kirollos, Ramez W; Trivedi, Rikin A

    2017-12-01

    Stereoscopic three-dimensional (3D) imaging is increasingly used in the teaching of neuroanatomy and although this is mainly aimed at undergraduate medical students, it has enormous potential for enhancing the training of neurosurgeons. This study aims to assess whether 3D lecturing is an effective method of enhancing the knowledge and confidence of neurosurgeons and how it compares with traditional two-dimensional (2D) lecturing and cadaveric training. Three separate teaching sessions for neurosurgical trainees were organized: 1) 2D course (2D lecture + cadaveric session), 2) 3D lecture alone, and 3) 3D course (3D lecture + cadaveric session). Before and after each session, delegates were asked to complete questionnaires containing questions relating to surgical experience, anatomic knowledge, confidence in performing procedures, and perceived value of 3D, 2D, and cadaveric teaching. Although both 2D and 3D lectures and courses were similarly effective at improving self-rated knowledge and understanding, the 3D lecture and course were associated with significantly greater gains in confidence reported by the delegates for performing a subfrontal approach and sylvian fissure dissection. Stereoscopic 3D lectures provide neurosurgical trainees with greater confidence for performing standard operative approaches and enhances the benefit of subsequent practical experience in developing technical skills in cadaveric dissection. Copyright © 2017. Published by Elsevier Inc.

  5. Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.

    Science.gov (United States)

    Qiu, Fangtu T; von der Heydt, Rüdiger

    2005-07-07

    Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring 3D layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border ownership coding). Here, we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (Gestalt factors). These are combined in single neurons so that the "near" side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays, Gestalt factors influence the responses and can enhance or null the stereoscopic depth information.

  6. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented to the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of the maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled the maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but did not change the visual acuity of the amblyopic eyes. Therefore our dichoptic training method may produce extra gains in stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Dream Home: a multiview stereoscopic interior design system

    Science.gov (United States)

    Hsiao, Fu-Jen; Teng, Chih-Jen; Lin, Chung-Wei; Luo, An-Chun; Yang, Jinn-Cherng

    2010-01-01

    In this paper, a novel multi-view stereoscopic interior design system, "Dream Home", is presented to give users a new interior design experience. Unlike previous interior design systems, it emphasizes intuitive manipulation and real-time multi-view stereoscopic visualization. Users can carry out their own interior design using only their hands and eyes: they manipulate furniture cards directly to set up their living room in the model-house task space, get instant multi-view 3D visual feedback, and re-adjust the cards until they are satisfied. No special skills are required, and users can explore their design ideas freely. We hope that "Dream Home" will make interior design more user-friendly, more intuitive, and more vivid.

  8. Methodology for stereoscopic motion-picture quality assessment

    Science.gov (United States)

    Voronov, Alexander; Vatolin, Dmitriy; Sumin, Denis; Napadovsky, Vyacheslav; Borisov, Alexey

    2013-03-01

    Creating and processing stereoscopic video imposes additional quality requirements related to view synchronization. In this work we propose a set of algorithms for detecting typical stereoscopic-video problems, which appear owing to imprecise setup of capture equipment or incorrect postprocessing. We developed a methodology for analyzing the quality of S3D motion pictures and for revealing their most problematic scenes. We then processed 10 modern stereo films, including Avatar, Resident Evil: Afterlife and Hugo, and analyzed changes in S3D-film quality over the years. This work presents real examples of common artifacts (color and sharpness mismatch, vertical disparity and excessive horizontal disparity) in the motion pictures we processed, as well as possible solutions for each problem. Our results enable improved quality assessment during the filming and postproduction stages.
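
    Two of the artefacts targeted, vertical disparity and colour mismatch between views, can be estimated with generic feature matching and channel statistics, as sketched below. This uses OpenCV ORB features as a stand-in for the authors' algorithms; the helper names and the match limit are assumptions.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def _gray(img):
    """Convert a BGR frame to grayscale if needed (ORB expects single-channel input)."""
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def vertical_disparity_px(left, right, max_matches=200):
    """Median vertical offset (pixels) between matched features of a stereo pair."""
    orb = cv2.ORB_create()
    kL, dL = orb.detectAndCompute(_gray(left), None)
    kR, dR = orb.detectAndCompute(_gray(right), None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(dL, dR), key=lambda m: m.distance)[:max_matches]
    dy = [kL[m.queryIdx].pt[1] - kR[m.trainIdx].pt[1] for m in matches]
    return float(np.median(dy))

def color_mismatch(left, right):
    """Per-channel mean brightness difference between the two views."""
    return left.astype(float).mean(axis=(0, 1)) - right.astype(float).mean(axis=(0, 1))
```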

  9. Current status of stereoscopic 3D LCD TV technologies

    Science.gov (United States)

    Choi, Hee-Jin

    2011-06-01

    The year 2010 may be remembered as the first year of successful commercial 3D products. Among them, 3D LCD TVs are expected to be the largest segment in terms of sales volume. In this paper, the principles of current stereoscopic 3D LCD TV techniques and the flat panel display (FPD) technologies required to realize them are reviewed.

  10. Scintigraphic and echographic thyroid image matching by a stereoscopic method

    International Nuclear Information System (INIS)

    Ballet, E.; Rousseau, J.; Marchandise, X.; Cussac, J.F.; Ballet, E.; Vasseur, C.; Gibon, D.

    1997-01-01

    We developed a device which allows us to match echographic data and scintiscanning data in a common 3D reference system. In thyroid exploration, this device complements the nuclear medicine examination by simultaneously specifying the volume and echo-structure of the gland. The positions of the γ-camera and the echograph are determined in a 3D reference system using the stereo-vision principle: two CCD cameras allow both sensors to be located within 1.6 m, and the sensors may be moved in a 0.4 m x 0.4 m FOV. Real-time computation is reduced by limiting the data to be processed to light-emitting landmarks mounted on each sensor, which are used to calculate its position and orientation. Matching accuracy is better than 0.5 mm for position and better than 0.35 deg for orientation. The average sensor-marking time is less than 0.5 s. (authors)

  11. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    Science.gov (United States)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

    Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features successfully introduced the public to this new generation of highly comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators and any trial-and-error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.

  12. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. To investigate this question, anisometropic amblyopes (n = 13) completed two psychophysical supra-threshold binocular summation tasks, (1) binocular phase combination and (2) dichoptic global motion coherence, before and after monocular training. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed in the area under the log contrast sensitivity function (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  13. Stereoscopic virtual reality models for planning tumor resection in the sellar region

    Directory of Open Access Journals (Sweden)

    Wang Shou-sen

    2012-11-01

    Full Text Available Abstract Background It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region. Methods To investigate the value of using a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography, MRI-T1W1, and contrast enhanced MRI-T1W1 image sequence scanning. The CT and MRI scanning data were collected and then imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery. Results All sellar tumor models clearly displayed bone, the internal carotid artery, circle of Willis and its branches, the optic nerve and chiasm, ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the transmononasal sphenoid sinus approach, transpterional approach, and other approaches. Eleven surgeons who used virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images but that other images needed to be used in combination with the virtual reality images. Conclusions The three-dimensional virtual reality models were helpful for individualized planning of surgery in the sellar region. Virtual reality appears to be promising as a valuable tool for sellar region surgery in the future.

  14. Preliminary Results for a Monocular Marker-Free Gait Measurement System

    Directory of Open Access Journals (Sweden)

    Jane Courtney

    2006-01-01

    Full Text Available This paper presents results from a novel monocular marker-free gait measurement system. The system was designed for physical and occupational therapists to monitor the progress of patients through therapy. It is based on a novel human motion capture method derived from model-based tracking. Testing is performed on two monocular, sagittal-view sample gait videos: one with both the environment and the subject's appearance and movement restricted, and one in a natural environment with unrestricted clothing and motion. Results of the modelling, tracking and analysis stages are presented along with standard gait graphs and parameters.

  15. A analysis of differences between common types of 3D stereoscopic movie & TV technology

    Directory of Open Access Journals (Sweden)

    CHEN Shuangyin

    2013-06-01

    Full Text Available 3D stereoscopic movie & TV technology is developing rapidly and spreading into everyday life. In this paper, the author analyzes 3D stereoscopic movie & TV technology in depth. By comparing and studying the different technical solutions for stereoscopic photography and video recording, production, and playback, the author summarizes the characteristics of the various approaches and analyzes their strengths and weaknesses. Finally, the paper describes specific applications of the existing technical solutions, sets out goals for improving 3D stereoscopic movie & TV technology, and outlines its future development.

  16. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

    Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
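
    The SSL scheme described, using stereo-derived average depth as a trusted label to train a monocular estimator, can be illustrated with a toy online regressor. The feature choice (mean brightness and vertical gradient energy) and the linear model are assumptions for illustration only, not the learner flown on the ISS.

```python
import numpy as np

# Toy self-supervised setup: while stereo works, its average depth serves as a
# "trusted" label for a regressor that maps simple monocular image statistics to
# average depth, giving a fallback once one camera fails.
def monocular_features(gray_img):
    g = gray_img.astype(float)
    return np.array([g.mean() / 255.0,
                     np.abs(np.diff(g, axis=0)).mean() / 255.0,
                     1.0])                      # bias term

class SSLDepthRegressor:
    def __init__(self, n_features=3, lr=0.05):
        self.w = np.zeros(n_features)
        self.lr = lr

    def update(self, gray_img, stereo_avg_depth):
        """Online gradient step while the stereo system is still healthy."""
        x = monocular_features(gray_img)
        err = self.w @ x - stereo_avg_depth
        self.w -= self.lr * err * x

    def predict(self, gray_img):
        """Monocular average-depth estimate, used after a camera failure."""
        return self.w @ monocular_features(gray_img)

reg = SSLDepthRegressor()
frame = (np.random.rand(120, 160) * 255).astype(np.uint8)   # stand-in camera frame
reg.update(frame, stereo_avg_depth=3.2)
print(reg.predict(frame))
```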

  17. Stereoscopic Augmented Reality System for Supervised Training on Minimal Invasive Surgery Robots

    DEFF Research Database (Denmark)

    Matu, Florin-Octavian; Thøgersen, Mikkel; Galsgaard, Bo

    2014-01-01

    ... the need for efficient training. When training with the robot, communication between the trainer and the trainee is limited, since the trainee often cannot see the trainer. To overcome this issue, this paper proposes an Augmented Reality (AR) system in which the trainer controls two virtual robotic arms. These arms are virtually superimposed on the video feed to the trainee and can therefore be used to demonstrate and perform various tasks for the trainee. Furthermore, the trainer is presented with a 3D image through a stereoscopic display. Because of the added depth perception, this enables ... the procedure, and thereby enhances the training experience. The virtual overlay was also found to work as a good and illustrative approach for enhanced communication. However, the delay of the prototype made it difficult to use for actual training.

  18. Pediatric Oculomotor Findings during Monocular Videonystagmography: A Developmental Study.

    Science.gov (United States)

    Doettl, Steven M; Plyler, Patrick N; McCaslin, Devin L; Schay, Nancy L

    2015-09-01

    The differential diagnosis of a dizzy patient >4 yrs old is often aided by videonystagmography (VNG) testing to provide a global assessment of peripheral and central vestibular function. Although the value of a VNG evaluation is well-established, it remains unclear if the VNG test battery is as applicable to the pediatric population as it is for adults. Oculomotor testing specifically, as opposed to spontaneous, positional, and caloric testing, is dependent upon neurologic function. Thus, age and corresponding neuromaturation may have a significant effect on oculomotor findings. The purpose of this investigation was to describe the effect of age on various tests of oculomotor function during a monocular VNG examination. Specifically, this study systematically characterized the impact of age on saccade tracking, smooth pursuit tracking, and optokinetic (OPK) nystagmus. The present study used a prospective, repeated measures design. A total of 62 healthy participants were evaluated. Group 1 consisted of 29 4- to 6-yr-olds. Group 2 consisted of 33 21- to 44-yr-olds. Each participant completed a standard VNG oculomotor test battery including saccades, smooth pursuit, and OPK testing in randomized order using a commercially available system. The response metrics saccade latency, accuracy, and speed, smooth pursuit gain, OPK nystagmus gain, speed and asymmetry ratios were collected and analyzed. Significant differences were noted between groups for saccade latency, smooth pursuit gain, and OPK asymmetry ratios. Saccade latency was significantly longer for the pediatric participants compared to the adult participants. Smooth pursuit gain was significantly less for the pediatric participants compared to the adult participants. The pediatric participants also demonstrated increased OPK asymmetry ratios compared to the adult participants. Significant differences were noted between the pediatric and adult participants for saccade latency, smooth pursuit gain, and OPK

  19. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    Science.gov (United States)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one such service, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client, and the user can then select some or all of the views according to display capabilities. However, this kind of system requires high processing power at both the server and the client, posing a difficulty for practical applications. To overcome this problem, a relatively simple method is to transmit only the two view sequences requested by the client in order to deliver a stereoscopic video. In such a system, effective communication between the server and the client is an important aspect. In this paper, we propose an efficient multi-view system that transmits two view sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One merit is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, the server sends the associated view sequences. Finally, we present a method which can reduce the user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between the two cameras. To

  20. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2016-01-01

    Monocular vision is increasingly used in Micro Air Vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicles’ movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  1. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2017-01-01

    Monocular vision is increasingly used in micro air vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicle movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  2. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...

  3. Transient monocular blindness and the risk of vascular complications according to subtype : a prospective cohort study

    NARCIS (Netherlands)

    Volkers, Eline J; Donders, Richard C J M; Koudstaal, Peter J; van Gijn, Jan; Algra, Ale; Jaap Kappelle, L

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341 consecutive

  4. Transient monocular blindness and the risk of vascular complications according to subtype: a prospective cohort study

    NARCIS (Netherlands)

    Volkers, E.J. (Eline J.); R. Donders (Rogier); P.J. Koudstaal (Peter Jan); van Gijn, J. (Jan); A. Algra (Ale); L. Jaap Kappelle

    2016-01-01

    textabstractPatients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341

  5. The effects of left and right monocular viewing on hemispheric activation.

    Science.gov (United States)

    Wang, Chao; Burtis, D Brandon; Ding, Mingzhou; Mo, Jue; Williamson, John B; Heilman, Kenneth M

    2018-03-01

    Prior research has revealed that whereas activation of the left hemisphere primarily increases the activity of the parasympathetic division of the autonomic nervous system, right-hemisphere activation increases the activity of the sympathetic division. In addition, each hemisphere primarily receives retinocollicular projections from the contralateral eye. A prior study reported that pupillary dilation was greater with left- than with right-eye monocular viewing. The goal of this study was to test the alternative hypotheses that this asymmetric pupil dilation with left-eye viewing was induced by activation of right-hemisphere-mediated sympathetic activity versus a reduction of left-hemisphere-mediated parasympathetic activity. Thus, this study was designed to learn whether there are changes in hemispheric activation, as measured by alteration of spontaneous alpha activity, during right versus left monocular viewing. High-density electroencephalography (EEG) was recorded from healthy participants viewing a crosshair with their right, left, or both eyes. There was significantly less alpha power over the right hemisphere's parietal-occipital area with left-eye and binocular viewing than with right-eye monocular viewing. The greater relative reduction of right-hemisphere alpha activity during left than during right monocular viewing provides further evidence that left-eye viewing induces a greater increase in right-hemisphere activation than does right-eye viewing.

  6. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to 1 eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  7. Monocular LASIK in adult patients with anisometropic amblyopia

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual results, preserving or even improving preoperative best-corrected visual acuity.

  8. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  9. Evaluating stereoscopic displays : both efficiency measures and perceived workload sensitive to manipulations in binocular disparity

    NARCIS (Netherlands)

    Beurden, van M.H.P.H.; IJsselsteijn, W.A.; Kort, de Y.A.W.; Woods, A.J.; Holliman, N.S.; Dodgson, N.A.

    2011-01-01

    Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure

  10. Low-cost universal stereoscopic virtual reality interfaces

    Science.gov (United States)

    Starks, Michael R.

    1993-09-01

    Low cost stereoscopic virtual reality hardware interfacing with nearly any computer and stereoscopic software running on any PC is described. Both are user configurable for serial or parallel ports. Stereo modeling, rendering, and interaction via gloves or 6D mice are provided. Low cost LCD Visors and external interfaces represent a breakthrough in convenience and price/performance. A complete system with software, Visor, interface and Power Glove is under $500. StereoDrivers will interface with any system giving video sync (e.g., G of RGB). PC3D will access any standard serial port, while PCVR works with serial or parallel ports and glove devices. Model RF Visors detect magnetic fields and require no connection to the system. PGSI is a microprocessor control for the Power Glove and Visors. All interfaces will operate to 120 Hz with Model G Visors. The SpaceStations are demultiplexing, field doubling devices which convert field sequential video or graphics for stereo display with dual video projection or dual LCD SpaceHelmets.

  11. Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.

    Science.gov (United States)

    Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M

    2012-02-01

    We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across-disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure-ground segregation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Some theoretical aspects of the design of stereoscopic television systems

    International Nuclear Information System (INIS)

    Jones, A.

    1980-03-01

    Several parameters which together specify the performance of a stereoscopic television system which has been demonstrated in reactors are investigated theoretically. These are: (1) the minimum resolvable depth interval in object space, (2) the region of space which can be displayed in three dimensions without causing undue eyestrain to the observer, (3) distortions which may arise in the display. The resulting equations form a basis from which operational stereocameras can be designed and a particular example is given, which also illustrates the relationships between the parameters. It is argued that the extent of the stereo region (parameter (2) above) predicted by previously published work is probably too large for closed circuit television inspection. This arises because the criterion used to determine the maximum tolerable screen parallax is too generous. An alternative, based upon the size of Panum's fusional area (a property of the observer's eye) is proposed. Preliminary experimental support for the proposal is given by measurements of the extent of the stereoscopic region using a number of observers. (author)
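
    As a rough illustration of the geometric quantities this record discusses, the sketch below computes the disparity of a point and the approximate minimum resolvable depth interval for a simple parallel stereo-camera model. The baseline, focal length, and disparity step are illustrative placeholders, not the parameters of the reactor-inspection system described above.

```python
# Hedged sketch: pinhole stereo geometry for depth-resolution estimates.
# All numeric values are illustrative assumptions.

def disparity_px(depth_m, baseline_m, focal_px):
    """Disparity (pixels) of a point at depth_m for a parallel stereo rig."""
    return focal_px * baseline_m / depth_m

def min_depth_interval(depth_m, baseline_m, focal_px, disparity_step_px=1.0):
    """Approximate smallest resolvable depth change at depth_m, assuming the
    smallest detectable disparity change is disparity_step_px."""
    return depth_m ** 2 * disparity_step_px / (focal_px * baseline_m)

if __name__ == "__main__":
    b, f = 0.10, 800.0          # 10 cm baseline, 800 px focal length (illustrative)
    for Z in (0.5, 1.0, 2.0):   # object depths in metres
        print(Z, disparity_px(Z, b, f), min_depth_interval(Z, b, f))
```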

  13. Surgical approaches to complex vascular lesions: the use of virtual reality and stereoscopic analysis as a tool for resident and student education.

    Science.gov (United States)

    Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J

    2012-08-01

    Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.

  14. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    Science.gov (United States)

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially noticeable when compared with other related visual SLAM configurations. In order to improve the observability properties, some measurements of the relative distance between the UAVs are included in the system. These relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis. Furthermore, an extensive set of computer simulations is presented in order to validate the proposed approach. The numerical simulation results show that the proposed system is able to provide good position and orientation estimation of the aerial vehicles flying in formation.

  15. Stereoscopic augmented reality with pseudo-realistic global illumination effects

    Science.gov (United States)

    de Sorbier, Francois; Saito, Hideo

    2014-03-01

    Recently, augmented reality has become very popular and has appeared in our daily life in gaming, guidance systems and mobile phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as proposed by most previous methods, we propose a dynamic capture of the scene that allows adapting to live changes of the environment. Our approach, based on the update of an environment map, can also detect the position of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects using stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.

  16. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  17. Efficient Stereoscopic Video Matching and Map Reconstruction for a Wheeled Mobile Robot

    Directory of Open Access Journals (Sweden)

    Oscar Montiel-Ross

    2012-10-01

    Full Text Available This paper presents a novel method to achieve stereoscopic vision for mobile robot (MR) navigation with the advantage of not needing camera calibration for depth (distance) estimation measurements. It uses the concept of the adaptive candidate matching window for stereoscopic correspondence in block matching, resulting in improvements in efficiency and accuracy. An average time reduction of 40% in the calculation process is obtained. All the algorithms for navigation, including the stereoscopic vision module, were implemented using an original computer architecture for the Virtex 5 FPGA, where a distributed multicore processor system was embedded and coordinated using the Message Passing Interface.
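
    A minimal sketch of the underlying block-matching idea is given below, using a plain sum-of-absolute-differences search over a fixed disparity range. The paper's adaptive candidate matching window narrows this search per block, which is not reproduced here; block size and disparity range are illustrative.

```python
# Hedged sketch of plain block matching by sum of absolute differences (SAD)
# on rectified grayscale images; parameters are illustrative assumptions.
import numpy as np

def block_match(left, right, block=8, max_disp=32):
    """Return a coarse disparity map (one value per block) for two rectified
    grayscale images given as 2-D arrays of equal shape."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.float32)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate disparities
                cand = right[y:y + block, x - d:x - d + block].astype(np.float32)
                cost = np.abs(ref - cand).sum()     # SAD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```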

  18. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binoc...

  19. [Acute monocular loss of vision : Differential diagnostic considerations apart from the internistic etiological clarification].

    Science.gov (United States)

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

    We report the case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological evaluation remained without pathological findings with respect to arterial branch occlusion. A reevaluation of the patient history led to a possible association with the administration of a phosphodiesterase type 5 inhibitor (PDE5 inhibitor). A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  20. Distance Estimation by Fusing Radar and Monocular Camera with Kalman Filter

    OpenAIRE

    Feng, Yuxiang; Pickering, Simon; Chappell, Edward; Iravani, Pejman; Brace, Christian

    2017-01-01

    The major contribution of this paper is to propose a low-cost, accurate distance estimation approach. It can potentially be used in driver modelling, accident avoidance and autonomous driving. Based on MATLAB and Python, sensory data from a Continental radar and a monocular dashcam were fused using a Kalman filter. Both sensors were mounted on a Volkswagen Sharan, performing repeated drives on the same route. The established system consists of three components: radar data processing, camera dat...
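
    The sketch below illustrates the fusion idea in its simplest form: a one-dimensional constant-velocity Kalman filter that combines a radar range and a camera-derived range for the same lead vehicle. The motion model and noise values are assumptions for illustration, not those used in the paper.

```python
# Hedged sketch: fusing radar and camera range measurements with a 1-D
# constant-velocity Kalman filter. Noise levels are illustrative assumptions.
import numpy as np

def kalman_fuse(radar_r, camera_r, dt=0.05, q=0.5, r_radar=0.3, r_cam=2.0):
    """radar_r, camera_r: equal-length sequences of range measurements (metres)."""
    x = np.array([radar_r[0], 0.0])          # state: [range, range-rate]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0], [1.0, 0.0]])   # both sensors observe range only
    R = np.diag([r_radar, r_cam])
    fused = []
    for zr, zc in zip(radar_r, camera_r):
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        z = np.array([zr, zc])                # stacked radar + camera measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)               # update
        P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return fused
```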

  1. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man presented to the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians should look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing erectile dysfunction drugs such as sildenafil.

  2. A pilot study on pupillary and cardiovascular changes induced by stereoscopic video movies

    Directory of Open Access Journals (Sweden)

    Sugita Norihiro

    2007-10-01

    Full Text Available Abstract Background Taking advantage of developed image technology, it is expected that image presentation would be utilized to promote health in the field of medical care and public health. To accumulate knowledge on biomedical effects induced by image presentation, an essential prerequisite for these purposes, studies on autonomic responses in more than one physiological system would be necessary. In this study, changes in parameters of the pupillary light reflex and cardiovascular reflex evoked by motion pictures were examined, which would be utilized to evaluate the effects of images, and to avoid side effects. Methods Three stereoscopic video movies with different properties were field-sequentially rear-projected through two LCD projectors on an 80-inch screen. Seven healthy young subjects watched movies in a dark room. Pupillary parameters were measured before and after presentation of movies by an infrared pupillometer. ECG and radial blood pressure were continuously monitored. The maximum cross-correlation coefficient between heart rate and blood pressure, ρmax, was used as an index to evaluate changes in the cardiovascular reflex. Results Parameters of pupillary and cardiovascular reflexes changed differently after subjects watched three different video movies. Amplitudes of the pupillary light reflex, CR, increased when subjects watched two CG movies (movies A and D, while they did not change after watching a movie with the real scenery (movie R. The ρmax was significantly larger after presentation of the movie D. Scores of the questionnaire for subjective evaluation of physical condition increased after presentation of all movies, but their relationship with changes in CR and ρmax was different in three movies. Possible causes of these biomedical differences are discussed. Conclusion The autonomic responses were effective to monitor biomedical effects induced by image presentation. Further accumulation of data on multiple autonomic
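
    The index ρmax mentioned above is the maximum of the cross-correlation between a heart-rate series and a blood-pressure series over a range of lags. A minimal sketch of such a computation is shown below; the lag range is an illustrative assumption and does not reproduce the study's protocol.

```python
# Hedged sketch: maximum normalized cross-correlation (rho_max) between two
# physiological time series over a symmetric lag range. Parameters are illustrative.
import numpy as np

def rho_max(heart_rate, blood_pressure, max_lag=10):
    hr = (np.asarray(heart_rate) - np.mean(heart_rate)) / np.std(heart_rate)
    bp = (np.asarray(blood_pressure) - np.mean(blood_pressure)) / np.std(blood_pressure)
    n = len(hr)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = hr[lag:], bp[:n - lag]
        else:
            a, b = hr[:n + lag], bp[-lag:]
        best = max(best, float(np.mean(a * b)))   # correlation at this lag
    return best
```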

  3. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    Science.gov (United States)

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.

  4. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft in an autonomous manner. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats); it is designed to operate in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  5. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    Science.gov (United States)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive field of visual neurons has real-time transforms in response to motion, to maintain a stable representation. When the visual stimulus is changed due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field. This dual transform in the receptive fields compensates geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine parameter sensing circuits will function as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed the computer simulation, experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology to implement the neural geometric engine. We have benchmark-tested the engine on DMA terrain data against their results and have built an analog integrated circuit to verify its computational structure. When fully implemented on an analog VLSI chip, the engine will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.

  6. Influence of stereoscopic vision on task performance with an operating microscope

    NARCIS (Netherlands)

    Nibourg, Lisanne M.; Wanders, Wouter; Cornelissen, Frans W.; Koopmans, Steven A.

    PURPOSE: To determine the extent to which stereoscopic depth perception influences the performance of tasks executed under an operating microscope. SETTING: Laboratory of Experimental Ophthalmology, University Medical Center Groningen, the Netherlands. DESIGN: Experimental study. METHODS: Medical

  7. Relationship between Stereoscopic Vision, Visual Perception, and Microstructure Changes of Corpus Callosum and Occipital White Matter in the 4-Year-Old Very Low Birth Weight Children

    Directory of Open Access Journals (Sweden)

    Przemko Kwinta

    2015-01-01

    Full Text Available Aim. To assess the relationship between stereoscopic vision, visual perception, and microstructure of the corpus callosum (CC and occipital white matter, 61 children born with a mean birth weight of 1024 g (SD 270 g were subjected to detailed ophthalmologic evaluation, Developmental Test of Visual Perception (DTVP-3, and diffusion tensor imaging (DTI at the age of 4. Results. Abnormal stereoscopic vision was detected in 16 children. Children with abnormal stereoscopic vision had smaller CC (CC length: 53±6 mm versus 61±4 mm; p<0.01; estimated CC area: 314±106 mm2 versus 446±79 mm2; p<0.01 and lower fractional anisotropy (FA values in CC (FA value of rostrum/genu: 0.7±0.09 versus 0.79±0.07; p<0.01; FA value of CC body: 0.74±0.13 versus 0.82±0.09; p=0.03. We found a significant correlation between DTVP-3 scores, CC size, and FA values in rostrum and body. This correlation was unrelated to retinopathy of prematurity. Conclusions. Visual perceptive dysfunction in ex-preterm children without major sequelae of prematurity depends on more subtle changes in the brain microstructure, including CC. Role of interhemispheric connections in visual perception might be more complex than previously anticipated.

  8. Monocular Depth Perception and Robotic Grasping of Novel Objects

    Science.gov (United States)

    2009-06-01

    The approach applies without needing to obtain an object's full 3D shape, and works even for textureless, translucent or reflective objects on which standard stereo 3D reconstruction fares poorly. A "phantom planes" cue enforces occlusion constraints across multiple cameras; concretely, it is applied to each small plane (superpixel) in the image.

  9. Evaluating stereoscopic displays: both efficiency measures and perceived workload sensitive to manipulations in binocular disparity

    Science.gov (United States)

    van Beurden, Maurice H. P. H.; Ijsselsteijn, Wijnand A.; de Kort, Yvonne A. W.

    2011-03-01

    Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure potential performance advantages. However, completion time and accuracy may not fully reflect all the benefits of stereoscopic displays. In this paper, we argue that perceived workload is an additional valuable indicator reflecting the extent to which users can benefit from using stereoscopic displays. We performed an experiment in which participants were asked to perform a visual path-tracing task within a convoluted 3D wireframe structure, varying in level of complexity of the visualised structure and level of disparity of the visualisation. The results showed that optimal performance (completion time, accuracy and workload) depends on both task difficulty and disparity level. Stereoscopic disparity yielded faster and more accurate task performance, and we observed a trend for performance on difficult tasks to benefit more from higher levels of disparity than performance on easy tasks. Perceived workload (as measured using the NASA-TLX) showed a similar response pattern, providing evidence that perceived workload is sensitive to variations in disparity as well as task difficulty. This suggests that perceived workload could be a useful concept, in addition to standard performance indicators, in characterising and measuring human performance advantages when using stereoscopic displays.

  10. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    Science.gov (United States)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopic technology has developed very rapidly and has been employed in many different fields such as entertainment. Given the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major developments in 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied because they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or report some sickness. Therefore, we discuss stereoscopic visual discomfort and the extent to which viewers experience eye fatigue while watching 3D content or playing 3D games. Solutions suggested in the literature for these problems are also discussed.

  11. Cirurgia monocular para esotropias de grande ângulo: um novo paradigma Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias through large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more were operated on under general anesthesia, with no intra- or postoperative adjustments. Refractometry, visual acuity, and the angle of deviation were measured with the methods traditionally used in strabology. Postoperatively, in addition to measurements in the primary position of gaze, motility of the operated eye was assessed in adduction and in abduction. RESULTS: Four study groups were considered, corresponding to four time points: one week, six months, two years, and four to seven years. The postoperative deviation angles were compatible with those reported in the literature and remained stable over time. Motility of the operated eye showed a small limitation in adduction and none in abduction, contrary to what is reported in the strabological literature. No statistically significant differences were found when comparing adults with children or amblyopes with non-amblyopes. CONCLUSION: These results indicate that monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, in adults as well as children, and in amblyopes as well as non-amblyopes.

  12. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Full Text Available Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented in online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the "MET 205 Robotics and Mechatronics" class to provide the students with a better robotics education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students' recommendation, polarization was chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students' evaluations. Because the system is Internet-based, multiple clients can carry out online automation development. In the future, students at different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotics learning can be shared and enriched regardless of location.

  13. Measurements of steady flow through a bileaflet mechanical heart valve using stereoscopic PIV.

    Science.gov (United States)

    Hutchison, Chris; Sullivan, Pierre; Ethier, C Ross

    2011-03-01

    Computational modeling of bileaflet mechanical heart valve (BiMHV) flow requires experimentally validated datasets and improved knowledge of BiMHV fluid mechanics. In this study, flow was studied downstream of a model BiMHV in an axisymmetric aortic sinus using stereoscopic particle image velocimetry. The inlet flow was steady and the Reynolds number based on the aortic diameter was 7600. Results showed the out-of-plane velocity was of similar magnitude as the transverse velocity. Although additional studies are needed for confirmation, analysis of the out-of-plane velocity showed the possible presence of a four-cell streamwise vortex structure in the mean velocity field. Spatial data for all six Reynolds stress components were obtained. Reynolds normal stress profiles revealed similarities between the central jet and free jets. These findings are important to BiMHV flow modeling, though clinical relevance is limited due to the idealized conditions chosen. To this end, the dataset is publicly available for CFD validation purposes.
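
    The six Reynolds stress components reported in this study are time averages of products of velocity fluctuations. The sketch below shows a generic way to compute them from a stack of instantaneous PIV velocity fields; the array names and shapes are assumptions, not the format of the published dataset.

```python
# Hedged sketch: estimating the six Reynolds stress components from a stack of
# instantaneous stereoscopic-PIV velocity fields (u, v, w on a common grid).
import numpy as np

def reynolds_stresses(u, v, w):
    """u, v, w: arrays of shape (n_snapshots, ny, nx) in m/s.
    Returns a dict of the six time-averaged Reynolds stress components."""
    up = u - u.mean(axis=0)          # fluctuations about the mean velocity field
    vp = v - v.mean(axis=0)
    wp = w - w.mean(axis=0)
    return {
        "uu": (up * up).mean(axis=0), "vv": (vp * vp).mean(axis=0),
        "ww": (wp * wp).mean(axis=0), "uv": (up * vp).mean(axis=0),
        "uw": (up * wp).mean(axis=0), "vw": (vp * wp).mean(axis=0),
    }
```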

  14. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

    Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. It is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require an initialization procedure that starts the system from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking for an indoor robot and has high potential for extension to surveillance and monitoring for Unmanned Aerial Vehicles with aerial odometry sensors. The experimental results show centimeter-level accuracy in localizing targets in an indoor environment under high-speed robot movement.

  15. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees.

    Science.gov (United States)

    Kastberger, Gerald; Maurer, Michael; Weihmann, Frank; Ruether, Matthias; Hoetzl, Thomas; Kranner, Ilse; Bischof, Horst

    2011-02-08

    The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method could be used to study aspects of other mass phenomena that
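
    The final step described above, converting stereo-matched image points into real-world coordinates, is standard triangulation. The sketch below illustrates it with OpenCV; the projection matrices would come from the stereo calibration and are placeholders here.

```python
# Hedged sketch: triangulating one stereo-matched point pair into 3-D with OpenCV.
# P1 and P2 are 3x4 camera projection matrices from a prior stereo calibration.
import numpy as np
import cv2

def triangulate(pt_left, pt_right, P1, P2):
    """pt_left, pt_right: (x, y) pixel coordinates of the same feature
    (e.g., a bee thorax) in the two views. Returns metric (x, y, z)."""
    a = np.array(pt_left, dtype=np.float64).reshape(2, 1)
    b = np.array(pt_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, a, b)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()           # de-homogenize
```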

  16. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    Directory of Open Access Journals (Sweden)

    Hoetzl Thomas

    2011-02-01

    Full Text Available Abstract Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. With further, minor modifications, the method

  17. Stereoscopic vision in the absence of the lateral occipital cortex.

    Directory of Open Access Journals (Sweden)

    Jenny C A Read

    2010-09-01

    Full Text Available Both dorsal and ventral cortical visual streams contain neurons sensitive to binocular disparities, but the two streams may underlie different aspects of stereoscopic vision. Here we investigate stereopsis in the neurological patient D.F., whose ventral stream, specifically lateral occipital cortex, has been damaged bilaterally, causing profound visual form agnosia. Despite her severe damage to cortical visual areas, we report that DF's stereo vision is strikingly unimpaired. She is better than many control observers at using binocular disparity to judge whether an isolated object appears near or far, and to resolve ambiguous structure-from-motion. DF is, however, poor at using relative disparity between features at different locations across the visual field. This may stem from a difficulty in identifying the surface boundaries where relative disparity is available. We suggest that the ventral processing stream may play a critical role in enabling healthy observers to extract fine depth information from relative disparities within one surface or between surfaces located in different parts of the visual field.

  18. Development of a stereoscopic three-dimensional drawing application

    Science.gov (United States)

    Carver, Donald E.; McAllister, David F.

    1991-08-01

    With recent advances in 3-D technology, computer users have the opportunity to work within a natural 3-D environment; a flat panel LCD computer display of this type, the DTI-100M made by Dimension Technologies, Inc., recently went on the market. In a joint venture between DTI and NCSU, an object-oriented 3-D drawing application, 3-D Draw, was developed to address some issues of human interface design for interactive stereo drawing applications. The focus of this paper is to determine some of the procedures a user would naturally expect to follow while working within a true 3-D environment. The paper discusses (1) the interface between the Macintosh II and DTI-100M during implementation of 3-D Draw, including stereo cursor development and presentation of current 2-D systems, with an additional "depth" parameter, in the 3-D world, (2) problems in general for human interface into the 3-D environment, and (3) necessary functions and/or problems in developing future stereoscopic 3-D operating systems/tools.

  19. Stereoscopic, thermal, and true deep cumulus cloud top heights

    Science.gov (United States)

    Llewellyn-Jones, D. T.; Corlett, G. K.; Lawrence, S. P.; Remedios, J. J.; Sherwood, S. C.; Chae, J.; Minnis, P.; McGill, M.

    2004-05-01

    We compare cloud-top height estimates from several sensors: thermal tops from GOES-8 and MODIS, stereoscopic tops from MISR, and directly measured heights from the Goddard Cloud Physics Lidar on board the ER-2, all collected during the CRYSTAL-FACE field campaign. Comparisons reveal a persistent 1-2 km underestimation of cloud-top heights by thermal imagery, even when the finite optical extinctions near cloud top and in thin overlying cirrus are taken into account. The most severe underestimates occur for the tallest clouds. The MISR "best-winds" and lidar estimates disagree in very similar ways with thermally estimated tops, which we take as evidence of excellent performance by MISR. Encouraged by this, we use MISR to examine variations in cloud penetration and thermal top height errors in several locations of tropical deep convection over multiple seasons. The goals of this are, first, to learn how cloud penetration depends on the near-tropopause environment; and second, to gain further insight into the mysterious underestimation of tops by thermal imagery.

  20. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our syst...
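
    As a hedged illustration of the kind of feature-based front end that ORB-SLAM2 builds on (not the ORB-SLAM2 code itself), the sketch below matches ORB features between two monocular frames and recovers their relative pose with OpenCV. The intrinsic matrix K is an assumed input, and the translation is recovered only up to scale, which is why monocular SLAM needs extra information (e.g., stereo or inertial data) for metric scale.

```python
# Hedged sketch: ORB matching, essential-matrix estimation and pose recovery
# between two monocular frames with OpenCV. This illustrates the general
# front-end idea only; it is not the ORB-SLAM2 implementation.
import numpy as np
import cv2

def relative_pose(img1, img2, K):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=mask)
    return R, t   # rotation and unit-norm translation (scale unobservable)
```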

  1. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).

  2. Digital stereoscopic convergence where video games and movies for the home user meet

    Science.gov (United States)

    Schur, Ethan

    2009-02-01

    Today there is a proliferation of stereoscopic 3D display devices, 3D content, and 3D enabled video games. As we in the S-3D community bring stereoscopic 3D to the home user we have a real opportunity of using stereoscopic 3D to bridge the gap between exciting immersive games and home movies. But to do this, we cannot limit ourselves to current conceptions of gaming and movies. We need, for example, to imagine a movie that is fully rendered using avatars in a stereoscopic game environment. Or perhaps to imagine a pervasive drama where viewers can play too and become an essential part of the drama - whether at home or on the go on a mobile platform. Stereoscopic 3D is the "glue" that will bind these video and movie concepts together. As users feel more immersed, the lines between current media will blur. This means that we have the opportunity to shape the way that we, as humans, view and interact with each other, our surroundings and our most fundamental art forms. The goal of this paper is to stimulate conversation and further development on expanding the current gaming and home theatre infrastructures to support greatly-enhanced experiential entertainment.

  3. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.
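
    The error figure quoted above is a translation root-mean-square error over the keyframe trajectory. A minimal sketch of that metric is shown below, assuming the estimated and ground-truth keyframe positions are already time-associated and expressed in the same frame (no alignment step is performed).

```python
# Hedged sketch: translation RMSE between an estimated keyframe trajectory and
# ground truth, assuming pre-associated positions in a common frame.
import numpy as np

def translation_rmse(estimated, ground_truth):
    """estimated, ground_truth: (N, 3) arrays of keyframe positions in metres."""
    err = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```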

  4. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

    Full Text Available Simultaneous localization and mapping (SLAM is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  5. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  7. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

    This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109
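
    The single-track ("bicycle") model named above relates planar vehicle motion to speed, yaw rate and side-slip angle. The sketch below only illustrates that kinematic relationship with a simple Euler integration step; it is not the paper's closed-form visual-odometry solution or its RANSAC hypothesis generator.

```python
# Hedged sketch: one Euler step of a planar single-track (bicycle) motion model.
import math

def propagate(x, y, heading, speed, yaw_rate, slip_angle, dt):
    """Advance planar pose (x, y, heading) by dt.
    The velocity vector is rotated from the heading by the side-slip angle."""
    course = heading + slip_angle
    x += speed * math.cos(course) * dt
    y += speed * math.sin(course) * dt
    heading += yaw_rate * dt
    return x, y, heading
```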

  8. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves the range accuracy significantly.

  9. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

    Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.

  10. Usage of stereoscopic visualization in the learning contents of rotational motion.

    Science.gov (United States)

    Matsuura, Shu

    2013-01-01

    Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.

  11. Doing Textiles Experiments in Game-Based Virtual Reality: A Design of the Stereoscopic Chemical Laboratory (SCL) for Textiles Education

    Science.gov (United States)

    Lau, Kung Wong; Kan, Chi Wai; Lee, Pui Yuen

    2017-01-01

    Purpose: The purpose of this paper is to discuss the use of stereoscopic virtual technology in textile and fashion studies, in particular in the area of chemical experiments. The development of a designed virtual platform, called Stereoscopic Chemical Laboratory (SCL), is introduced. Design/methodology/approach: To implement the suggested…

  12. A Variant of LSD-SLAM Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise causes tracking divergence.

  13. Charles Miller Fisher: the 65th anniversary of the publication of his groundbreaking study "Transient Monocular Blindness Associated with Hemiplegia".

    Science.gov (United States)

    Araújo, Tiago Fernando Souza de; Lange, Marcos; Zétola, Viviane H; Massaro, Ayrton; Teive, Hélio A G

    2017-10-01

    Charles Miller Fisher is considered the father of modern vascular neurology and one of the giants of neurology in the 20th century. This historical review emphasizes Prof. Fisher's magnificent contribution to vascular neurology and celebrates the 65th anniversary of the publication of his groundbreaking study, "Transient Monocular Blindness Associated with Hemiplegia."

  14. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
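
    The sketch below illustrates the detection-plus-range idea with OpenCV's stock HOG + linear-SVM pedestrian detector and a pinhole-model range estimate from bounding-box height. The stock detector is a full-body model rather than the paper's head-and-shoulder detector, and the focal length and assumed person height are illustrative placeholders.

```python
# Hedged sketch: HOG + SVM person detection and a pinhole range estimate from
# bounding-box height. Focal length and person height are assumed values.
import cv2

def detect_and_range(frame_gray, focal_px=700.0, person_height_m=1.7):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame_gray, winStride=(8, 8))
    results = []
    for (x, y, w, h) in boxes:
        distance_m = focal_px * person_height_m / float(h)    # pinhole model
        offset_px = x + w / 2.0 - frame_gray.shape[1] / 2.0    # lateral offset
        results.append((distance_m, offset_px))
    return results
```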

  15. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

    In this paper, we propose an approach for the detection of distant moving obstacles, such as cars and bicycles, by a monocular camera cooperating with ultrasonic sensors under low-cost conditions. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation of the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car prove that the method is effective for the real-time detection of distant moving obstacles.
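
    A minimal sketch of the ego-motion-compensated frame-differencing step, assuming ORB feature matching and a RANSAC homography to align consecutive frames; the threshold and feature choice are illustrative, and the per-obstacle confidence logic described in the abstract is omitted.

    ```python
    import cv2
    import numpy as np

    orb = cv2.ORB_create(1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def moving_obstacle_mask(prev_gray, curr_gray, diff_thresh=25):
        """Compensate camera ego-motion with a homography, then frame-difference."""
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        matches = matcher.match(des1, des2)
        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        # Warp the previous frame into the current frame's coordinates so that
        # static background cancels and only independently moving objects remain.
        h, w = curr_gray.shape
        prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
        diff = cv2.absdiff(curr_gray, prev_warped)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
        return mask
    ```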

  16. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electronics, Suwon (Korea, Republic of)]; Sun, Ju Young; Won, Mooncheol [Chungnam Nat'l Univ., Daejeon (Korea, Republic of)]

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  17. Application of stereo-imaging technology to medical field.

    Science.gov (United States)

    Nam, Kyoung Won; Park, Jeongyun; Kim, In Young; Kim, Kwang Gi

    2012-09-01

    There has been continuous development in the area of stereoscopic medical imaging devices, and many stereoscopic imaging devices have been realized and applied in the medical field. In this article, we review past and current trends pertaining to the application of stereo-imaging technologies in the medical field. We describe the basic principles of stereo vision and visual issues related to it, including visual discomfort, binocular disparities, vergence-accommodation mismatch, and visual fatigue. We also present a brief history of medical applications of stereo-imaging techniques, examples of recently developed stereoscopic medical devices, and patent application trends as they pertain to stereo-imaging medical devices. Three-dimensional (3D) stereo-imaging technology can provide more realistic depth perception to the viewer than conventional two-dimensional imaging technology. Therefore, it allows for a more accurate understanding and analysis of the morphology of an object. Based on these advantages, the significance of stereoscopic imaging in the medical field increases in accordance with the increase in the number of laparoscopic surgeries, and stereo-imaging technology plays a key role in the diagnoses of the detailed morphologies of small biological specimens. The application of 3D stereo-imaging technology to the medical field will help improve surgical accuracy, reduce operation times, and enhance patient safety. Therefore, it is important to develop more enhanced stereoscopic medical devices.

  18. ESTABLISHING A STEREOSCOPIC TECHNIQUE FOR DETERMINING THE KINEMATIC PROPERTIES OF SOLAR WIND TRANSIENTS BASED ON A GENERALIZED SELF-SIMILARLY EXPANDING CIRCULAR GEOMETRY

    International Nuclear Information System (INIS)

    Davies, J. A.; Perry, C. H.; Harrison, R. A.; Trines, R. M. G. M.; Lugaz, N.; Möstl, C.; Liu, Y. D.; Steed, K.

    2013-01-01

    The twin-spacecraft STEREO mission has enabled simultaneous white-light imaging of the solar corona and inner heliosphere from multiple vantage points. This has led to the development of numerous stereoscopic techniques to investigate the three-dimensional structure and kinematics of solar wind transients such as coronal mass ejections (CMEs). Two such methods—triangulation and the tangent to a sphere—can be used to determine time profiles of the propagation direction and radial distance (and thereby radial speed) of a solar wind transient as it travels through the inner heliosphere, based on its time-elongation profile viewed by two observers. These techniques are founded on the assumption that the transient can be characterized as a point source (fixed φ, FP, approximation) or a circle attached to Sun-center (harmonic mean, HM, approximation), respectively. These geometries constitute extreme descriptions of solar wind transients, in terms of their cross-sectional extent. Here, we present the stereoscopic expressions necessary to derive propagation direction and radial distance/speed profiles of such transients based on the more generalized self-similar expansion (SSE) geometry, for which the FP and HM geometries form the limiting cases; our implementation of these equations is termed the stereoscopic SSE method. We apply the technique to two Earth-directed CMEs from different phases of the STEREO mission, the well-studied event of 2008 December and a more recent event from 2012 March. The latter CME was fast, with an initial speed exceeding 2000 km s⁻¹, and highly geoeffective, in stark contrast to the slow and ineffectual 2008 December CME.
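
    A minimal numerical sketch of the idea, assuming the commonly quoted single-observer SSE relation R = d sin ε (1 + sin λ) / (sin(ε + φ) + sin λ), where ε is the measured elongation, φ the propagation angle from the observer, d the observer's heliocentric distance, and λ the transient's angular half-width. The stereoscopic step below simply requires both observers' radial distances to agree and solves for φ with a root finder; the planar geometry and the constraint φ_A + φ_B = (observer separation) are simplifying assumptions, not the paper's full formulation.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def sse_radius(elong, phi, d_obs, half_width):
        """Single-observer SSE relation (R in the same units as d_obs).
        All angles are in radians."""
        return d_obs * np.sin(elong) * (1 + np.sin(half_width)) / (
            np.sin(elong + phi) + np.sin(half_width))

    def stereoscopic_sse(elong_a, elong_b, d_a, d_b, sep_angle, half_width):
        """Solve for the propagation angle phi_a (from observer A) such that both
        observers place the transient's leading edge at the same radial distance.
        Assumes phi_a + phi_b = sep_angle (transient between the two observers)."""
        def mismatch(phi_a):
            phi_b = sep_angle - phi_a
            return (sse_radius(elong_a, phi_a, d_a, half_width)
                    - sse_radius(elong_b, phi_b, d_b, half_width))
        phi_a = brentq(mismatch, 1e-4, sep_angle - 1e-4)
        return phi_a, sse_radius(elong_a, phi_a, d_a, half_width)

    # Synthetic example: elongations seen by two observers 90 deg apart at 1 AU,
    # with an assumed 30 deg half-width; these values were constructed so the
    # recovered direction is roughly 40 deg from observer A and R is near 0.5 AU.
    phi_a, R = stereoscopic_sse(np.radians(28.5), np.radians(29.8),
                                1.0, 1.0, np.radians(90), np.radians(30))
    ```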

  19. The Impact of Stereoscopic Imagery and Motion on Anatomical Structure Recognition and Visual Attention Performance

    Science.gov (United States)

    Remmele, Martin; Schmidt, Elena; Lingenfelder, Melissa; Martens, Andreas

    2018-01-01

    Gross anatomy is located in a three-dimensional space. Visualizing aspects of structures in gross anatomy education should aim to provide information that best resembles their original spatial proportions. Stereoscopic three-dimensional imagery might offer possibilities to implement this aim, though some research has revealed potential impairments…

  20. What is 3D good for? A review of human performance on stereoscopic 3D displays

    Science.gov (United States)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as necessary to ensure good performance.

  1. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig rendered into a real-time game scene. We use this effect to place the stereoscopic effigies of the players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  2. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    Science.gov (United States)

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  3. Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography

    Science.gov (United States)

    Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.

    2016-01-01

    Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes. PMID:27231616

  4. Interaction in a Virtual Museum Using a Hand Sensor with Stereoscopic 3D Presentation

    Directory of Open Access Journals (Sweden)

    Gary Almas Samaita

    2017-01-01

    Full Text Available Technological advances have led museums to develop new ways of presenting their collections. One technology adapted for virtual museum presentation is Virtual Reality (VR) with stereoscopic 3D. Unfortunately, virtual museums with stereoscopic presentation still use a keyboard and mouse as interaction devices. This study aims to design and implement hand-sensor interaction in a virtual museum with stereoscopic 3D presentation. The virtual museum is visualized with a side-by-side stereoscopic technique through an Android-based Head Mounted Display (HMD). The HMD also provides head tracking by reading the head orientation. Hand interaction is implemented using a hand sensor mounted on the HMD. Because the hand sensor is not supported by the Android-based HMD, a server is used as an intermediary between the HMD and the hand sensor. Testing showed that the average confidence rate of the hand-sensor readings for the hand gestures that trigger interactions was 99.92%, with an average effectiveness of 92.61%. A usability test based on ISO/IEC 9126-4 was also conducted to measure the effectiveness, efficiency, and user satisfaction of the designed system, by asking participants to perform 9 tasks representing hand interactions in the virtual museum. The results show that all of the designed hand gestures could be performed by the participants, although the gestures were rated as fairly difficult. A questionnaire revealed that a total of 86.67% of participants agreed that hand interaction provided a new experience of enjoying a virtual museum.

  5. An exploration of the initial effects of stereoscopic displays on optometric parameters

    NARCIS (Netherlands)

    Fortuin, M.F.; Lambooij, M.T.M.; IJsselsteijn, W.A.; Heynderickx, I.E.J.; Edgar, D.F.; Evans, B.J.W.

    2011-01-01

    PURPOSE: To compare the effect on optometric variables of reading text presented in 2-D and 3-D on two types of stereoscopic display. METHODS: This study measured changes in binocular visual acuity, fixation disparity, aligning prism, heterophoria, horizontal fusional reserves, prism facility and

  6. Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography.

    Science.gov (United States)

    Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S; Kuo, Anthony N; Toth, Cynthia A; Izatt, Joseph A

    2016-05-01

    Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with a 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes.

  7. Stereoscopic PIV and POD applied to the far turbulent axisymmetric jet

    DEFF Research Database (Denmark)

    Wähnström, Maja; George, William K.; Meyer, Knud Erik

    2006-01-01

    This work applies stereoscopic PIV to the far field of the same jet in which the mode-2 phenomenon was first noticed. Indeed, azimuthal mode-1 is maximal if all three velocity components are considered, so the new findings are confirmed. This work also addresses a number of outstanding issues from all...

  8. Measurement and Image Processing Techniques for Particle Image Velocimetry Using Solid-Phase Carbon Dioxide

    Science.gov (United States)

    2014-03-27

    stereoscopic PIV: the angular displacement configuration and the translation configuration. The angular displacement configuration is most commonly used today...images were processed using ImageJ, an open-source, Java-based image processing software available from the National Institutes of Health (NIH).

  9. Three-dimensional temporally resolved measurements of turbulence-flame interactions using orthogonal-plane cinema-stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Adam Michael; Driscoll, James F. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States); Ceccio, Steven L. [University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI (United States)

    2009-09-15

    A new orthogonal-plane cinema-stereoscopic particle image velocimetry (OPCS-PIV) diagnostic has been used to measure the dynamics of three-dimensional turbulence-flame interactions. The diagnostic employed two orthogonal PIV planes, with one aligned perpendicular and one aligned parallel to the streamwise flow direction. In the plane normal to the flow, temporally resolved slices of the nine-component velocity gradient tensor were determined using Taylor's hypothesis. Volumetric reconstruction of the 3D turbulence was performed using these slices. The PIV plane parallel to the streamwise flow direction was then used to measure the evolution of the turbulence; the path and strength of 3D turbulent structures as they interacted with the flame were determined from their image in this second plane. Structures of both vorticity and strain-rate magnitude were extracted from the flow. The geometry of these structures agreed well with predictions from direct numerical simulations. The interaction of turbulent structures with the flame also was observed. In three dimensions, these interactions had complex geometries that could not be reflected in either planar measurements or simple flame-vortex configurations. (orig.)
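
    A minimal NumPy sketch of the Taylor's-hypothesis step described above: temporally resolved slices measured in the cross-stream plane are reinterpreted as streamwise stations of a frozen field convecting at a constant speed. The constant-convection-velocity assumption and the array layout are illustrative simplifications.

    ```python
    import numpy as np

    def taylor_pseudo_volume(planar_slices, u_convection, frame_rate):
        """Stack temporally resolved cross-stream PIV slices into a pseudo-volume.

        planar_slices : array of shape (n_frames, ny, nz, 3), velocity vectors
                        measured in the plane normal to the streamwise direction
        u_convection  : mean convection speed (m/s) used in Taylor's hypothesis
        frame_rate    : acquisition rate of the cinema PIV system (Hz)

        Taylor's hypothesis maps time to a streamwise coordinate, x = -U_c * t,
        so successive frames become successive streamwise stations of a frozen
        turbulence field.
        """
        n_frames = planar_slices.shape[0]
        t = np.arange(n_frames) / frame_rate
        x_stations = -u_convection * t            # streamwise coordinate per frame
        volume = np.asarray(planar_slices)        # (n_x, ny, nz, 3) pseudo-volume
        return x_stations, volume
    ```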

  10. Stereoscopic (3D) versus monoscopic (2D) laparoscopy: comparative study of performance using advanced HD optical systems in a surgical simulator model.

    Science.gov (United States)

    Schoenthaler, Martin; Schnell, Daniel; Wilhelm, Konrad; Schlager, Daniel; Adams, Fabian; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz

    2016-04-01

    To compare task performances of novices and experts using advanced high-definition 3D versus 2D optical systems in a surgical simulator model. Fifty medical students (novices in laparoscopy) were randomly assigned to perform five standardized tasks adopted from the Fundamentals of Laparoscopic Surgery (FLS) curriculum in either a 2D or a 3D laparoscopy simulator system. In addition, eight experts performed the same tasks. Task performances were evaluated using a validated scoring system of the SAGES/FLS program. Participants were asked to rate 16 items in a questionnaire. Overall task performance of novices was significantly better using stereoscopic visualization. The superiority of performances in 3D reached the level of significance for the peg-transfer and precision-cutting tasks. No significant differences were noted in the performances of experts when using either 2D or 3D. Overall performances of experts compared to novices were better in both 2D and 3D. Scorings in the questionnaires showed a tendency toward lower scores in the group of novices using 3D. Stereoscopic imaging significantly improves the performance of laparoscopic phantom tasks by novices. The current study confirms earlier data based on a large number of participants and a standardized task and scoring system. Participants felt more confident and comfortable when using a 3D laparoscopic system. However, the question remains open whether these findings translate into faster and safer operations in a clinical setting.

  11. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.
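
    A schematic sketch of a two-stage contrast gain-control model with separate monocular and interocular (dichoptic) suppression weights, in the spirit of the modified Meese and Baker model referred to above; the exponents, constants, and exact functional form are illustrative placeholders rather than the published parameterization or fitted values.

    ```python
    import numpy as np

    def two_stage_response(c_test_L, c_mask_L, c_test_R, c_mask_R,
                           w_mono=1.0, w_dich=1.0,
                           m=1.3, p=2.4, q=2.0, s1=1.0, s2=1.0):
        """Schematic two-stage gain-control model of cross-orientation masking.

        Stage 1: each eye's test response is divisively suppressed by the mask in
                 the same eye (weight w_mono) and by the mask in the other eye
                 (weight w_dich).
        Stage 2: the two monocular outputs are summed binocularly and passed
                 through a second gain-control stage.
        All parameter values here are illustrative placeholders.
        """
        def stage1(c_test, c_mask_same, c_mask_other):
            return c_test**m / (s1 + c_test + w_mono * c_mask_same + w_dich * c_mask_other)

        r_L = stage1(c_test_L, c_mask_L, c_mask_R)
        r_R = stage1(c_test_R, c_mask_R, c_mask_L)
        binocular_sum = r_L + r_R
        return binocular_sum**p / (s2 + binocular_sum**q)
    ```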

  12. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  13. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, WIlson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach that uses range measurement through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with a wall-following behavior to explore the environment, which enables it to trail the contours of objects, with the fuzzy control being responsible for providing the commands for the correct execution of the robot's movements while facing the advers...
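
    A minimal sketch of the homography-based range step: image pixels belonging to obstacle contours on the floor are mapped to metric ground-plane coordinates and written into a 2D occupancy grid. The homography H is assumed to come from a prior calibration, and the grid resolution and helper names are illustrative; the wall-following and fuzzy-control parts are not shown.

    ```python
    import numpy as np

    def pixel_to_ground(H, u, v):
        """Map an image pixel (u, v) on the floor to metric (x, y) on the ground
        plane using a pre-calibrated homography H (3x3)."""
        p = H @ np.array([u, v, 1.0])
        return p[0] / p[2], p[1] / p[2]

    def update_occupancy_grid(grid, H, obstacle_pixels, cell_size=0.05, origin=(0.0, 0.0)):
        """Mark grid cells as occupied for image pixels classified as obstacle
        contours. `grid` is a 2D numpy array; cell_size is in metres per cell."""
        for (u, v) in obstacle_pixels:
            x, y = pixel_to_ground(H, u, v)
            i = int((x - origin[0]) / cell_size)
            j = int((y - origin[1]) / cell_size)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = 1          # occupied
        return grid
    ```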

  14. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

    figure and ground the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... Sponsor: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Subject terms: figure-ground, neural network, object.

  15. Temporal visual field defects are associated with monocular inattention in chiasmal pathology.

    Science.gov (United States)

    Fledelius, Hans C

    2009-11-01

    Chiasmal lesions have been shown to give rise occasionally to uni-ocular temporal inattention, which cannot be compensated for by volitional eye movement. This article describes the assessments of 46 such patients with chiasmal pathology. It aims to determine the clinical spectrum of this disorder, including interference with reading. Retrospective consecutive observational clinical case study over a 7-year period comprising 46 patients with chiasmal field loss of varying degrees. Observation of reading behaviour during monocular visual acuity testing ascertained from consecutive patients who appeared unable to read optotypes on the temporal side of the chart. Visual fields were evaluated by kinetic (Goldmann) and static (Octopus) techniques. Five patients who clearly manifested this condition are presented in more detail. The results of visual field testing were related to absence or presence of uni-ocular visual inattentive behaviour for distance visual acuity testing and/or reading printed text. Despite normal eye movements, the 46 patients making up the clinical series perceived only optotypes in the nasal part of the chart, in one eye or in both, when tested for each eye in turn. The temporal optotypes were ignored, and this behaviour persisted despite instruction to search for any additional letters temporal to those, which had been seen. This phenomenon of unilateral visual inattention held for both eyes in 18 and was unilateral in the remaining 28 patients. Partial or full reversibility after treatment was recorded in 21 of the 39 for whom reliable follow-up data were available. Reading a text was affected in 24 individuals, and permanently so in six. A neglect-like spatial unawareness and a lack of cognitive compensation for varying degrees of temporal visual field loss were present in all the patients observed. Not only is visual field loss a feature of chiasmal pathology, but the higher visual function of affording attention within the temporal visual

  16. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  17. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    Science.gov (United States)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  18. Distortion of depth perception in virtual environments using stereoscopic displays: quantitative assessment and corrective measures

    Science.gov (United States)

    Kleiber, Michael; Winkelholz, Carsten

    2008-02-01

    The aim of the presented research was to quantify the distortion of depth perception when using stereoscopic displays. The visualization parameters of the used virtual reality system such as perspective, haploscopic separation and width of stereoscopic separation were varied. The experiment was designed to measure distortion in depth perception according to allocentric frames of reference. The results of the experiments indicate that some of the parameters have an antithetic effect which allows to compensate the distortion of depth perception for a range of depths. In contrast to earlier research which reported underestimation of depth perception we found that depth was overestimated when using true projection parameters according to the position of the eyes of the user and display geometry.

  19. The Effect of Stereoscopic ("3D") vs. 2D Presentation on Learning through Video and Film

    Science.gov (United States)

    Price, Aaron; Kasal, E.

    2014-01-01

    Two Eyes, 3D is an NSF-funded research project into the effects of stereoscopy on the learning of highly spatial concepts. We report final results of one study of the project, which tested the effect of stereoscopic presentation on the learning outcomes of two short films about Type 1a supernovae and the morphology of the Milky Way. 986 adults watched either film, randomly distributed between stereoscopic and 2D presentation. They took a pre-test and a post-test that included multiple-choice and drawing tasks related to the spatial nature of the topics in the film. The orientation of the answering device was also tracked, and a spatial cognition pre-test was given to control for prior spatial ability. Data collection took place at the Adler Planetarium's Space Visualization Lab, and the project is run through the AAVSO.

  20. Atomic structure of Fe thin-films on Cu(0 0 1) studied with stereoscopic photography

    International Nuclear Information System (INIS)

    Hattori, Azusa N.; Fujikado, M.; Uchida, T.; Okamoto, S.; Fukumoto, K.; Guo, F.Z.; Matsui, F.; Nakatani, K.; Matsushita, T.; Hattori, K.; Daimon, H.

    2004-01-01

    The complex magnetic properties of Fe films epitaxially grown on Cu(0 0 1) have been discussed in relation to their atomic structure. We have studied the Fe films on Cu(0 0 1) by a new direct method for three-dimensional (3D) atomic structure analysis, so-called 'stereoscopic photography'. The forward-focusing peaks in the photoelectron angular distribution pattern excited by the circularly polarized light rotate around the light axis in either clockwise or counterclockwise direction depending on the light helicity. By using a display-type spherical mirror analyzer for this phenomenon, we can obtain stereoscopic photographs of atomic structure. The photographs revealed that the iron structure changes from bcc to fcc and almost bcc structure with increasing iron film thickness

  1. Synaptic Mechanisms of Activity-Dependent Remodeling in Visual Cortex during Monocular Deprivation

    Directory of Open Access Journals (Sweden)

    Cynthia D. Rittenhouse

    2009-01-01

    Full Text Available It has long been appreciated that in the visual cortex, particularly within a postnatal critical period for experience-dependent plasticity, the closure of one eye results in a shift in the responsiveness of cortical cells toward the experienced eye. While the functional aspects of this ocular dominance shift have been studied for many decades, their cortical substrates and synaptic mechanisms remain elusive. Nonetheless, it is becoming increasingly clear that ocular dominance plasticity is a complex phenomenon that appears to have an early and a late component. Early during monocular deprivation, deprived eye cortical synapses depress, while later during the deprivation open eye synapses potentiate. Here we review current literature on the cortical mechanisms of activity-dependent plasticity in the visual system during the critical period. These studies shed light on the role of activity in shaping neuronal structure and function in general and can lead to insights regarding how learning is acquired and maintained at the neuronal level during normal and pathological brain development.

  2. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning, namely neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and the support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.
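
    A schematic scikit-learn pipeline that mirrors the flow described above. LocallyLinearEmbedding is used here as a stand-in for neighborhood-preserving embedding (which scikit-learn does not provide), and a plain SVM stands in for the HCRF+SVM hybrid, so this illustrates the pipeline shape rather than reproducing the method.

    ```python
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row per expression sequence, built by concatenating the per-frame
    # deformable-template trait vectors line by line; y: expression labels.
    def build_expression_classifier(n_components=10, n_neighbors=12):
        return make_pipeline(
            StandardScaler(),
            LocallyLinearEmbedding(n_components=n_components, n_neighbors=n_neighbors),
            SVC(kernel="rbf", C=10.0),
        )

    # Usage sketch (hypothetical training/testing splits):
    # clf = build_expression_classifier()
    # clf.fit(X_train, y_train)
    # accuracy = clf.score(X_test, y_test)
    ```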

  3. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

    The dynamic envelope curve is defined as the maximum limit outline of the train caused by various adverse effects during running. It is an important basis for setting railway clearance boundaries. At present, the measurement of the dynamic envelope curve of high-speed vehicles is mainly achieved by means of binocular vision. The present measuring systems suffer from problems such as poor portability, a complicated process and high cost. A new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed, and the measurement system parameters, the calibration of the camera with a wide field of view, and the calibration of the laser plane are designed and optimized in this paper. The accuracy has been verified to be within 2 mm by repeated tests and analysis of the experimental data. The feasibility and adaptability of the measurement system are validated. The system has advantages such as lower cost, a simpler measurement and data processing procedure, and more reliable data, and it needs no matching algorithm.
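
    A minimal sketch of the monocular triangulation step such a system typically relies on: a pixel on the laser stripe is back-projected into a viewing ray using the camera intrinsics and intersected with the calibrated laser plane. The intrinsic matrix and plane coefficients are assumed to come from the calibrations mentioned above.

    ```python
    import numpy as np

    def laser_plane_point(K, plane, pixel):
        """Triangulate a 3D point from one calibrated camera and a laser plane.

        K     : 3x3 camera intrinsic matrix
        plane : (a, b, c, d) such that a*x + b*y + c*z + d = 0 in camera coordinates
        pixel : (u, v) image coordinates of a point on the laser stripe

        The viewing ray is r(t) = t * K^-1 [u, v, 1]^T (camera centre at origin);
        substituting into the plane equation gives the scale t and hence the point.
        """
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        n, d = np.array(plane[:3]), plane[3]
        t = -d / (n @ ray)
        return t * ray   # 3D point on the laser plane, in camera coordinates
    ```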

  4. [Effect of acupuncture on pattern-visual evoked potential in rats with monocular visual deprivation].

    Science.gov (United States)

    Yan, Xing-Ke; Dong, Li-Li; Liu, An-Guo; Wang, Jun-Yan; Ma, Chong-Bing; Zhu, Tian-Tian

    2013-08-01

    To explore the electrophysiological mechanism of acupuncture for the treatment and prevention of the visual deprivation effect. Eighteen healthy 15-day-old Evans rats were randomly divided into a normal group, a model group and an acupuncture group, 6 rats in each one. A deprivation amblyopia model was established by monocular eyelid suture in the model group and the acupuncture group. Acupuncture was applied at "Jingming" (BL 1), "Chengqi" (ST 1), "Qiuhou" (EX-HN 7) and "Cuanzhu" (BL 2) in the acupuncture group. The bilateral acupoints were selected alternately, one side per day, for a total of 14 days. The effect of acupuncture on the visual evoked potential at different spatial frequencies was observed. At the three spatial frequencies of 2 X 2, 4 X 4 and 8 X 8, compared with the normal group, there was an obvious visual deprivation effect in the model group, where the P1 peak latency was delayed (P<0.05). At the spatial frequency of 4 X 4, the N1-P1 amplitude was greatest in the normal group and the acupuncture group. At this spatial frequency the rat's eye had its best resolving ability, indicating that it could be the best spatial frequency for the rat visual system. The visual system shows obvious electrophysiological plasticity during the sensitive period. Acupuncture treatment could counteract the deprivation-induced suppression and slowing of the visual response, thereby antagonizing the deprivation effect.

  5. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2017-02-01

    Full Text Available Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments.
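
    A minimal sketch of the nonlinear pose refinement, posed here as image-space reprojection-error minimization over a 6-DOF pose with SciPy; the rotation-vector parameterization, the initial guess and the residual definition are stand-ins for the paper's object-space collinearity formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def reprojection_residuals(pose, object_pts, image_pts, K):
        """pose = [rx, ry, rz, tx, ty, tz] (rotation vector + translation)."""
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        t = pose[3:]
        cam_pts = object_pts @ R.T + t               # part frame -> camera frame
        proj = cam_pts @ K.T
        proj = proj[:, :2] / proj[:, 2:3]            # perspective division
        return (proj - image_pts).ravel()

    def estimate_pose(object_pts, image_pts, K, pose0=None):
        """Refine the 6-DOF part pose by minimizing the image-plane reprojection
        error of the measured feature points (a stand-in for the object-space
        collinearity formulation described in the abstract)."""
        if pose0 is None:
            pose0 = np.zeros(6)
            pose0[5] = 1.0                           # start the part 1 m in front
        result = least_squares(reprojection_residuals, pose0,
                               args=(object_pts, image_pts, K))
        return result.x
    ```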

  6. Dynamic stereoscopic selective visual attention (DSSVA): integrating motion and shape with depth in video segmentation

    OpenAIRE

    López Bonal, María Teresa; Fernández Caballero, Antonio; Saiz Valverde, Sergio

    2008-01-01

    Depth inclusion as an important parameter for dynamic selective visual attention is presented in this article. The model introduced in this paper is based on two previously developed models, dynamic selective visual attention and visual stereoscopy, giving rise to the so-called dynamic stereoscopic selective visual attention method. The three models are based on the accumulative computation problem-solving method. This paper shows how software reusability enables enhancing results in vision r...

  7. Monoscopic versus stereoscopic photography in screening for clinically significant macular edema.

    Science.gov (United States)

    Welty, Christopher J; Agarwal, Anita; Merin, Lawrence M; Chomsky, Amy

    2006-01-01

    The purpose of the study was to determine whether monoscopic photography could serve as an accurate tool when used to screen for clinically significant macular edema. In a masked randomized fashion, two readers evaluated monoscopic and stereoscopic retinal photographs of 100 eyes. The photographs were evaluated first individually for probable clinically significant macular edema based on the Early Treatment Diabetic Retinopathy Study criteria and then as stereoscopic pairs. Graders were evaluated for sensitivity and specificity individually and in combination. Individually, reader one had a sensitivity of 0.93 and a specificity of 0.77, and reader two had a sensitivity of 0.88 and a specificity of 0.94. In combination, the readers had a sensitivity of 0.91 and a specificity of 0.86. They correlated on 0.76 of the stereoscopic readings and 0.92 of the monoscopic readings. These results indicate that the use of monoscopic retinal photography may be an accurate screening tool for clinically significant macular edema.

  8. Stereoscopic and photometric surface reconstruction in scanning electron microscopy

    International Nuclear Information System (INIS)

    Scherer, S.

    2000-01-01

    The scanning electron microscope (SEM) is one of the most important devices to examine microscopic structures as it offers images of a high contrast range with a large depth of focus. Nevertheless, three-dimensional measurements, as desired in fracture mechanics, have previously not been accomplished. This work presents a system for automatic, robust and dense surface reconstruction in scanning electron microscopy combining new approaches in shape from stereo and shape from photometric stereo. The basic theoretical assumption for a known adaptive window algorithm is shown not to hold in scanning electron microscopy. A constraint derived from this observation yields a new, simplified, hence faster calculation of the adaptive window. The correlation measure itself is obtained by a new ordinal measure coefficient. Shape from photometric stereo in the SEM is formulated by relating the image formation process with conventional photography. An iterative photometric ratio reconstruction is invented based on photometric ratios of backscatter electron images. The performance of the proposed system is evaluated using ground truth data obtained by three alternative shape recovery devices. Most experiments showed relative height accuracy within the tolerances of the alternative devices. (author)

  9. Stereoscopic construction and practice of optoelectronic technology textbook

    Science.gov (United States)

    Zhou, Zigang; Zhang, Jinlong; Wang, Huili; Yang, Yongjia; Han, Yanling

    2017-08-01

    It is a professional degree course textbook for the Nation-class Specialty of Optoelectronic Information Science and Engineering, and it is also an engineering practice textbook for the cultivation of excellent photoelectric engineers. The book seeks to comprehensively introduce the theoretical and applied foundations of optoelectronic technology. It is closely linked to current developments at the frontier of the optoelectronic industry and is made up of the following core contents: the laser source, and the transmission, modulation, detection, imaging and display of light. At the same time, it also embodies the features of the laser source, waveguide transmission, and electronic and optical processing methods.

  10. Effects of extraocular muscle surgery in children with monocular blindness and bilateral nystagmus.

    Science.gov (United States)

    Sturm, Veit; Hejcmanova, Marketa; Landau, Klara

    2014-11-20

    Monocular infantile blindness may be associated with bilateral horizontal nystagmus, a subtype of fusion maldevelopment nystagmus syndrome (FMNS). Patients often adopt a significant anomalous head posture (AHP) towards the fixing eye in order to dampen the nystagmus. This clinical entity has also been reported as unilateral Ciancia syndrome. The aim of the study was to ascertain the clinical features and surgical outcome of patients with FMNS with infantile unilateral visual loss. In this retrospective case series, nine consecutive patients with FMNS with infantile unilateral visual loss underwent strabismus surgery to correct an AHP and/or improve ocular alignment. Outcome measures included amount of AHP and deviation at last follow-up. Eye muscle surgery according to the principles of Kestenbaum resulted in a marked reduction or elimination of the AHP. On average, a reduction of AHP of 1.3°/mm was achieved by predominantly performing combined horizontal recess-resect surgery in the intact eye. In cases of existing esotropia (ET) this procedure also markedly reduced the angle of deviation. A dosage calculation of 3 prism diopters/mm was established. We advocate a tailored surgical approach in FMNS with infantile unilateral visual loss. In typical patients who adopt a significant AHP accompanied by a large ET, we suggest an initial combined recess-resect surgery in the intact eye. This procedure regularly led to a marked reduction of the head turn and ET. In patients without significant strabismus, a full Kestenbaum procedure was successful, while ET in a patient with a minor AHP was corrected by performing a bimedial recession.

  11. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960’s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  12. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity with the crowded Landolt C decimal acuity mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.

  13. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo pairs: for this, a stereo validation setup providing an indoor reproduction of the flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand new concept to minimize mass and volume and to allow push-frame imaging. This model made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor set-up has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to almost 1 meter in the lab; on the other side, it allows replicating different viewing angles for the considered targets. Neglecting the curvature of Mercury for the sake of simplicity, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  14. Stereoscopic measurements of particle dispersion in microgravity turbulent flow

    Science.gov (United States)

    Groszmann, Daniel Eduardo

    2001-08-01

    The presence of particles in turbulent flows adds complexity to an already difficult subject. The work described in this research dissertation was intended to characterize the effects of inertia, isolated from gravity, on the dispersion of solid particles in a turbulent air flow. The experiment consisted of releasing particles of various sizes in an enclosed box of fan-generated, homogeneous, isotropic, and stationary turbulent airflow and examining the particle behavior in a microgravity environment. The turbulence box was characterized in ground-based experiments using laser Doppler velocimetry techniques. Microgravity was established by free-floating the experiment apparatus during the parabolic trajectory of NASA's KC-135 reduced gravity aircraft. The microgravity generally lasted about 20 seconds, with about fifty parabolas per flight and one flight per day over a testing period of four days. To cover a broad range of flow regimes of interest, particles with Stokes numbers (St) of 1 to 300 were released in the turbulence box. The three-dimensional measurements of particle motion were made using a three-camera stereo imaging system with a particle-tracking algorithm. Digital photogrammetric techniques were used to determine the particle locations in three-dimensional space from the calibrated camera images. The epipolar geometry constraint was used to identify matching particles from the three different views and a direct spatial intersection scheme determined the coordinates of particles in three-dimensional space. Using velocity and acceleration constraints, particles in a sequence of frames were matched resulting in particle tracks and dispersion measurements. The goal was to compare the dispersion of different Stokes number particles in zero gravity and decouple the effects of inertia and gravity on the dispersion. Results show that higher inertia particles disperse less in zero gravity, in agreement with current models. Particles with St ~ 200
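
    A minimal sketch of the two geometric steps mentioned above: an epipolar-distance test for pairing candidate particles between two calibrated views, and a linear (DLT) spatial intersection of matched image points from several cameras. The fundamental matrix and the 3x4 projection matrices are assumed to come from the camera calibration.

    ```python
    import numpy as np

    def epipolar_distance(F, x1, x2):
        """Distance of point x2 (in view 2) from the epipolar line of x1 (view 1).
        F is the fundamental matrix mapping view-1 points to view-2 lines."""
        l = F @ np.array([x1[0], x1[1], 1.0])        # epipolar line a*u + b*v + c = 0
        return abs(l @ np.array([x2[0], x2[1], 1.0])) / np.hypot(l[0], l[1])

    def triangulate_dlt(projections, points_2d):
        """Linear spatial intersection of one particle seen in several views.
        projections : list of 3x4 camera projection matrices
        points_2d   : list of (u, v) image coordinates, one per view."""
        rows = []
        for P, (u, v) in zip(projections, points_2d):
            rows.append(u * P[2] - P[0])
            rows.append(v * P[2] - P[1])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        X = vt[-1]
        return X[:3] / X[3]                          # homogeneous -> Euclidean
    ```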

  15. The Effect of Two-dimensional and Stereoscopic Presentation on Middle School Students' Performance of Spatial Cognition Tasks

    Science.gov (United States)

    Price, Aaron; Lee, Hee-Sun

    2010-02-01

    We investigated whether and how student performance on three types of spatial cognition tasks differs when worked with two-dimensional or stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects increased in the tasks. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to be familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.

  16. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see-through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After CT scans had been taken, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  17. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  18. Immersive Televisual Environments: Spectatorship, Stereoscopic Vision and the Failure of 3DTV

    Directory of Open Access Journals (Sweden)

    Ilkin Mehrabov

    2015-09-01

    Full Text Available This article focuses on one of the most ground-breaking technological attempts in creating novel immersive media environments for heightened televisual user experiences: 3DTV, a Network of Excellence funded by the European Commission 6th Framework Information Society Technologies Programme. Based on the theoretical framework outlined by the works of Jonathan Crary and Brian Winston, and on empirical data obtained from the author's fieldwork and laboratory visit notes, as well as discussions with practitioners, the article explores the history of stereoscopic vision and the technological progress related to it, and looks for possible reasons for 3DTV's dramatic commercial failure.

  19. Teaching-learning: stereoscopic 3D versus Traditional methods in Mexico City.

    Science.gov (United States)

    Mendoza Oropeza, Laura; Ortiz Sánchez, Ricardo; Ojeda Villagómez, Raúl

    2015-01-01

    In the UNAM Faculty of Odontology, a stereoscopic 3D teaching method has grown more common in the last year, which makes it important to know whether students learn better with this strategy. The objective of the study was to determine whether 4th-year students of the bachelor's degree in dentistry learn Orthodontics more effectively with stereoscopic 3D than with the traditional method. First, we selected the course topics to be used for both methods: the traditional method used slide projection, and the stereoscopic 3D method used videos in digital stereo projection (seen through "passive" polarized 3D glasses). The main topic was supernumerary teeth, included and deviated from their eruption guide. Afterwards we gave the students an exam containing 24 items, validated by expert judgment in Orthodontics teaching. The data from the two educational methods were compared to determine effectiveness using a before-and-after measurement model with the statistical package SPSS version 20. Results were collected from 9 groups of undergraduates in dentistry, with a total of 218 students across the 3D and traditional methods. For the traditional method we found a mean of 4.91, SD 1.4752 in the pretest and X = 6.96, SD 1.26622, St Error 0.12318 in the posttest. The 3D method had a mean of 5.21, SD 1.996779, St Error 0.193036 in the pretest and X = 7.82, SD 0.963963, St Error 0.09319 in the posttest; the analysis of variance between groups gave F = 5.60, Prob > 0.0000, and Bartlett's test for equal variances 21.0640, Prob > chi2 = 0.007. These results show that students' learning with 3D represents a significant improvement over the traditional teaching method, with a strong association between the two methods. The findings suggest that the stereoscopic 3D method leads to improved student learning compared to traditional teaching.

  20. Measurement of rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, O.; Meyer, Knud Erik; Larsen, Poul Scheel

    2004-01-01

    A simple technique is described for measuring the mean rate-of-displacement (velocity gradient) tensor in a plane by using a conventional stereoscopic PIV system. The technique involves taking PIV data in two or three closely-spaced parallel planes at different times. All components of the mean...... are presented to show the applicability of the proposed technique. The PIV cameras and light sheet optics shown in Fig. 1a are mounted on the same traverse mechanism in order to displace the measurement plane accurately. Data obtained in constant-y and -z planes are presented. Fig. 1b shows a contour plot...

  1. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  2. Differences in Optical Coherence Tomography Assessment of Bruch Membrane Opening Compared to Stereoscopic Photography for Estimating Cup-to-Disc Ratio.

    Science.gov (United States)

    Mwanza, Jean-Claude; Huang, Linda Y; Budenz, Donald L; Shi, Wei; Huang, Gintien; Lee, Richard K

    2017-12-01

    To compare the vertical and horizontal cup-to-disc ratio (VCDR, HCDR) by an updated optical coherence tomography (OCT) Bruch membrane opening (BMO) algorithm and stereoscopic optic disc photograph readings by glaucoma specialists. Reliability analysis. A total of 195 eyes (116 glaucoma and 79 glaucoma suspect) of 99 patients with stereoscopic photographs and OCT scans of the optic discs taken during the same visit were compared. Optic disc photographs were read by 2 masked glaucoma specialists for VCDR and HCDR estimation. Intraclass correlation coefficient (ICC) and Bland-Altman plots were used to assess the agreement between photograph reading and OCT in estimating CDR. OCT computed significantly larger VCDR and HCDR than photograph reading, both before and after stratifying eyes based on disc size (P < .001). The difference in CDR estimates between the 2 methods was equal to or greater than 0.2 in 29% and 35% of the eyes for VCDR and HCDR, respectively, with a mean difference of 0.3 in each case. The ICCs between the readers and OCT ranged between 0.50 and 0.63. The size of disagreement in VCDR correlated weakly with cup area in eyes with medium (r² = 0.10, P = .008) and large (r² = 0.09, P = .007) discs. OCT and photograph reading by clinicians agree poorly in CDR assessment. The difference in VCDR between the 2 methods depended on cup area in medium and large discs. These differences should be considered when making conclusions regarding CDRs in clinical practice. Copyright © 2017 Elsevier Inc. All rights reserved.
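
    For readers unfamiliar with the agreement statistics cited here, the sketch below computes a basic Bland-Altman analysis (mean difference and 95% limits of agreement) for paired cup-to-disc estimates from two methods. The data values are made up for illustration and are not taken from the study.

        import numpy as np

        def bland_altman(a, b):
            """Mean difference (bias) and 95% limits of agreement for paired measurements."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        # Hypothetical vertical CDR estimates: OCT BMO algorithm vs. photograph reading.
        vcdr_oct   = [0.75, 0.60, 0.82, 0.55, 0.70, 0.90, 0.65]
        vcdr_photo = [0.55, 0.50, 0.60, 0.45, 0.50, 0.70, 0.50]
        bias, lo, hi = bland_altman(vcdr_oct, vcdr_photo)
        print(f"bias = {bias:.2f}, limits of agreement = [{lo:.2f}, {hi:.2f}]")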

  3. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    Science.gov (United States)

    Gay, Jean-Philippe

    1995-03-01

    `reality present: Peter Gabriel and Cirque du Soleil' is a 12-minute original work directed and produced by Doug Brown, Jean-Philippe Gay & A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of 2 major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program was world premiered to a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  4. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    Science.gov (United States)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  5. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes, which allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' abilities to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Flow analysis of vortex generators on wing sections by stereoscopic particle image velocimetry measurements

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Cavar, Dalibor

    2008-01-01

    a wind turbine blade. The low Reynolds number is chosen on the basis that this is a fundamental investigation of the structures of the flow induced by vortex generators and the fact that one obtains a thicker boundary layer and larger structures evoked by the actuating devices, which are easier...... generators are applied. The idea behind the experiments is that the results will be offered for validation of modeling of the effect of vortex generators using various numerical codes. Initial large eddy simulation (LES) computations have been performed that show the same qualitative behaviour...

  7. Three-Dimensional Dynamic Deformation Measurements Using Stereoscopic Imaging and Digital Speckle Photography

    International Nuclear Information System (INIS)

    Prentice, H. J.; Proud, W. G.

    2006-01-01

    A technique has been developed to determine experimentally the three-dimensional displacement field on the rear surface of a dynamically deforming plate. The technique combines speckle analysis with stereoscopy, using a modified angular-lens method: this incorporates split-frame photography and a simple method by which the effective lens separation can be adjusted and calibrated in situ. Whilst several analytical models exist to predict deformation in extended or semi-infinite targets, the non-trivial nature of the wave interactions complicates the generation and development of analytical models for targets of finite depth. By interrogating specimens experimentally to acquire three-dimensional strain data points, both analytical and numerical model predictions can be verified more rigorously. The technique is applied to the quasi-static deformation of a rubber sheet and dynamically to mild steel sheets of various thicknesses.

  8. Digital image transformation and rectification of spacecraft and radar images

    Science.gov (United States)

    Wu, S. S. C.

    1985-01-01

    The application of digital processing techniques to spacecraft television pictures and radar images is discussed. The use of digital rectification to produce contour maps from spacecraft pictures is described; images with azimuth and elevation angles are converted into point-perspective frame pictures. The digital correction of the slant angle of radar images to ground scale is examined. The development of orthophoto and stereoscopic shaded relief maps from digital terrain and digital image data is analyzed. Digital image transformations and rectifications are utilized on Viking Orbiter and Lander pictures of Mars.

  9. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  10. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    Science.gov (United States)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.

  11. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    Directory of Open Access Journals (Sweden)

    Hwan Heo

    2014-05-01

    Full Text Available We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without the compensation of eye saccade movement in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than other factors.
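
    As a rough illustration of the first step described above (snapping an uncertain gaze estimate to the strongest edge inside a circular region), the sketch below builds a gradient-magnitude edge map and picks the maximum inside the circle. The image, gaze point, and radius are arbitrary placeholders rather than the authors' data or exact procedure.

        import numpy as np

        def refine_gaze_to_edge(image, gaze_xy, radius_px):
            """Pixel of maximum edge strength within a circle around the gaze estimate."""
            gy, gx = np.gradient(image.astype(float))
            edge = np.hypot(gx, gy)                          # gradient-magnitude edge map
            h, w = image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= radius_px ** 2
            edge[~inside] = -np.inf                          # ignore pixels outside the circle
            y, x = np.unravel_index(np.argmax(edge), edge.shape)
            return x, y

        rng = np.random.default_rng(0)
        frame = rng.random((480, 640))
        frame[:, 300:] += 1.0                                # strong vertical edge near x = 300
        print(refine_gaze_to_edge(frame, gaze_xy=(290, 240), radius_px=30))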

  12. Comparing Short- and Long-Term Learning Effects between Stereoscopic and Two-Dimensional Film at a Planetarium

    Science.gov (United States)

    Price, C. Aaron; Lee, Hee-Sun; Subbarao, Mark; Kasal, Evan; Aguileara, Julieta

    2015-01-01

    Science centers such as museums and planetariums have used stereoscopic ("three-dimensional") films to draw interest from and educate their visitors for decades. Despite the fact that most adults who are finished with their formal education get their science knowledge from such free-choice learning settings, very little is known about the…

  13. Evaluation of the Performance of Vortex Generators on the DU 91-W2-250 Profile using Stereoscopic PIV

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Meyer, Knud Erik

    2009-01-01

    Stereoscopic PIV measurements investigating the effect of Vortex Generators on the lift force near stall and on glide ratio at best aerodynamic performance have been carried out in the LM Glasfiber wind tunnel on a DU 91-W2-250 profile. Measurements at two Reynolds numbers were analyzed; Re=0...

  14. Evaluation of the Performance of Vortex Generators on the DU 91-W2-250 Profile using Stereoscopic PIV

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Meyer, Knud Erik

    2008-01-01

    Stereoscopic PIV measurements investigating the effect of Vortex Generators on the lift force near stall and on glide ratio at best aerodynamic performance have been carried out in the LM Glasfiber wind tunnel on a DU 91-W2-250 profile. Measurements at two Reynolds numbers were analyzed; Re=0...

  15. The future of three-dimensional medical imaging

    International Nuclear Information System (INIS)

    Peter, T.M.

    1996-01-01

    The past 15 years have witnessed an explosion in medical imaging technology, and none more so than in the tomographic imaging modalities of CT and MRI. Prior to 1975, 3-D imaging was largely performed in the minds of radiologists and surgeons, assisted by the modalities of conventional x-ray tomography and stereoscopic radiography. Today, however, with the advent of imaging techniques which owe their existence to computer technology, three-dimensional image acquisition is fast becoming the norm and the clinician finally has access to sets of data that represent the entire imaged volume. Stereoscopic image visualization has already begun to reappear as a viable means of visualizing 3-D medical images. The future of 3-D imaging is exciting and will undoubtedly move further in the direction of virtual reality. (author)

  16. Video stereopsis of cardiac MR images

    International Nuclear Information System (INIS)

    Johnson, R.F. Jr.; Norman, C.

    1988-01-01

    This paper describes MR images of the heart acquired using a spin-echo technique synchronized to the electrocardiogram. Sixteen 0.5-cm-thick sections with a 0.1-cm gap between each section were acquired in the coronal view to cover all the cardiac anatomy including vasculature. Two sets of images were obtained with a subject rotation corresponding to the stereoscopic viewing angle of the eyes. The images were digitized, spatially registered, and processed by a three-dimensional graphics workstation for stereoscopic viewing. Video recordings were made of each set of images and then temporally synchronized to produce a single video image corresponding to the appropriate eye view.

  17. Measuring Algorithm for the Distance to a Preceding Vehicle on Curve Road Using On-Board Monocular Camera

    Science.gov (United States)

    Yu, Guizhen; Zhou, Bin; Wang, Yunpeng; Wun, Xinkai; Wang, Pengcheng

    2015-12-01

    Due to increasingly severe traffic safety problems, Advanced Driver Assistance Systems (ADAS) have received widespread attention. Measuring the distance to a preceding vehicle is important for ADAS. However, existing algorithms focus more on straight road sections than on curves. In this paper, we present a novel algorithm for measuring the distance to a preceding vehicle on a curved road using an on-board monocular camera. Firstly, the characteristics of driving on curved roads are analyzed and recognition of the preceding vehicle's road area is proposed. Then, the vehicle detection and distance measuring algorithms are investigated. We have verified these algorithms in real road driving. The experimental results show that the method proposed in this paper can detect the preceding vehicle on curved roads and accurately calculate the longitudinal and horizontal distances to the preceding vehicle.
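
    The abstract does not reproduce the measurement equations, but a common flat-road approximation for monocular ranging relates the image row of the preceding vehicle's road-contact point to distance through camera height and focal length. The sketch below shows that simple pinhole relation only (not the curve-road method of the paper); all parameter values are assumptions.

        def flat_road_distance(v_contact, v_horizon, focal_px, cam_height_m):
            """Longitudinal distance to a road point from its image row (flat-road model).
            v_contact: row of the vehicle's road-contact point (below the horizon);
            v_horizon: row of the horizon; focal_px: focal length in pixels."""
            dv = v_contact - v_horizon
            if dv <= 0:
                raise ValueError("contact point must lie below the horizon")
            return focal_px * cam_height_m / dv

        # Assumed camera: 1400 px focal length, mounted 1.3 m above the road surface.
        print(flat_road_distance(v_contact=620, v_horizon=540,
                                 focal_px=1400, cam_height_m=1.3))   # about 22.8 m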

  18. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  19. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    Science.gov (United States)

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal
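
    The velocity-threshold classification mentioned above can be sketched compactly: compute angular velocity from the (geometry-corrected) gaze angles and label samples by threshold. The thresholds and synthetic trace below are arbitrary, and the paper's geometric correction for stimuli presented in the transverse plane is not reproduced.

        import numpy as np

        def classify_gaze(angles_deg, fs_hz, saccade_thresh=30.0, fixation_thresh=5.0):
            """Label samples as saccade / fixation / other from angular velocity (deg/s)."""
            vel = np.abs(np.gradient(angles_deg)) * fs_hz
            labels = np.full(vel.shape, "other", dtype=object)
            labels[vel >= saccade_thresh] = "saccade"
            labels[vel <= fixation_thresh] = "fixation"
            return vel, labels

        # Synthetic 500 Hz trace: fixation, a rapid 10-degree shift, then fixation again.
        trace = np.concatenate([np.zeros(50), np.linspace(0, 10, 10), np.full(50, 10.0)])
        vel, labels = classify_gaze(trace, fs_hz=500)
        print(labels[45:65])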

  20. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimally invasive surgery offers reduced trauma and faster recovery. However, the lack of a direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D panoramas from endoscopic video streams, providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D environments from mono- and stereoscopic endoscopy. The resulting 3D reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  1. Stereoscopic visualization in curved spacetime: seeing deep inside a black hole

    International Nuclear Information System (INIS)

    Hamilton, Andrew J S; Polhemus, Gavin

    2010-01-01

    Stereoscopic visualization adds an additional dimension to the viewer's experience, giving them a sense of distance. In a general relativistic visualization, distance can be measured in a variety of ways. We argue that the affine distance, which matches the usual notion of distance in flat spacetime, is a natural distance to use in curved spacetime. As an example, we apply affine distance to the visualization of the interior of a black hole. Affine distance is not the distance perceived with normal binocular vision in curved spacetime. However, the failure of binocular vision is simply a limitation of animals that have evolved in flat spacetime, not a fundamental obstacle to depth perception in curved spacetime. Trinocular vision would provide superior depth perception.

  2. Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment.

    Science.gov (United States)

    Kockro, Ralf A; Amaxopoulou, Christina; Killeen, Tim; Wagner, Wolfgang; Reisch, Robert; Schwandt, Eike; Gutenberg, Angelika; Giese, Alf; Stofft, Eckart; Stadie, Axel T

    2015-09-01

    Three-dimensional (3D) computer graphics are increasingly used to supplement the teaching of anatomy. While most systems consist of a program which produces 3D renderings on a workstation with a standard screen, the DextroBeam virtual reality (VR) environment allows the presentation of spatial neuroanatomical models to larger groups of students through a stereoscopic projection system. Second-year medical students (n=169) were randomly allocated to receive a standardised pre-recorded audio lecture detailing the anatomy of the third ventricle accompanied by either a two-dimensional (2D) PowerPoint presentation (n=80) or a 3D animated tour of the third ventricle with the DextroBeam. Students completed a 10-question multiple-choice exam based on the content learned and a subjective evaluation of the teaching method immediately after the lecture. Students in the 2D group achieved a mean score of 5.19 (±2.12) compared to 5.45 (±2.16) in the 3D group, with the results in the 3D group statistically non-inferior to those of the 2D group (p<0.0001). The students rated the 3D method superior to 2D teaching in four domains (spatial understanding, application in future anatomy classes, effectiveness, enjoyableness) (p<0.01). Stereoscopically enhanced 3D lectures are a valid method of imparting neuroanatomical knowledge and are well received by students. More research is required to define and develop the role of large-group VR systems in modern neuroanatomy curricula. Copyright © 2015 Elsevier GmbH. All rights reserved.

  3. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology cannot meet the growing desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different video formats and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. This Android player, which has all the basic functions of an ordinary player and can play normal 2D video, is the basic structure for redevelopment; RTSP is implemented into this structure for communication. To achieve stereoscopic display, pixel rearrangement is performed in the player's decoding part. The decoding part is native code called through the JNI interface, so that video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom, and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring, and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meets users' requirements.
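
    The pixel-rearrangement step depends on how the source frame is packed and on the target display. As one hedged example (not necessarily the conversion used by this player), the sketch below turns a side-by-side (left-right) frame into a row-interleaved frame, a layout used by some parallax-barrier panels.

        import numpy as np

        def side_by_side_to_row_interleaved(frame):
            """Convert an H x 2W x 3 left-right frame to an H x W x 3 row-interleaved frame:
            even rows come from the left view, odd rows from the right view."""
            h, w2, c = frame.shape
            w = w2 // 2
            left, right = frame[:, :w], frame[:, w:]
            out = np.empty((h, w, c), dtype=frame.dtype)
            out[0::2] = left[0::2]
            out[1::2] = right[1::2]
            return out

        # Dummy 8 x 16 RGB frame: left half black, right half white.
        sbs = np.zeros((8, 16, 3), dtype=np.uint8)
        sbs[:, 8:] = 255
        print(side_by_side_to_row_interleaved(sbs)[:, 0, 0])   # rows alternate 0 / 255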

  4. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
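
    To make the registration idea concrete, here is a minimal, generic iterative-closest-point (ICP) sketch: nearest-neighbour matching followed by an SVD-based rigid fit. It is not the authors' modified algorithm and omits the image-based surface tracking and the stereo reconstruction of the kidney surface.

        import numpy as np
        from scipy.spatial import cKDTree

        def rigid_fit(src, dst):
            """Least-squares rotation R and translation t with dst ~ R @ src + t (Kabsch)."""
            cs, cd = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                 # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, iters=20):
            """Tiny ICP: match each source point to its nearest model point, refit, repeat."""
            tree, cur = cKDTree(dst), src.copy()
            for _ in range(iters):
                _, idx = tree.query(cur)
                R, t = rigid_fit(cur, dst[idx])
                cur = cur @ R.T + t
            return cur

        rng = np.random.default_rng(1)
        model = rng.random((200, 3))                 # stand-in for the CT surface model
        a = np.deg2rad(5.0)
        Rz = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
        observed = model @ Rz.T + np.array([0.02, -0.01, 0.0])   # stand-in for stereo surface
        print(np.abs(icp(observed, model) - model).max())        # residual after alignment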

  5. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.

  6. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.

  7. Correspondence of line segments between two perpective images ...

    African Journals Online (AJOL)

    In order to permit the localization and the navigation of a mobile robot within an indoor environment, we have built a stereoscopic sensor and implemented all the algorithms which allow 3D coordinates of real objects to be obtained from image data. The sensor uses two mini cameras in a vertical arrangement. Processing on the ...

  8. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  9. Three-dimensional location of target fish by monocular infrared imaging sensor based on a L-z correlation model

    Science.gov (United States)

    Lin, Kai; Zhou, Chao; Xu, Daming; Guo, Qiang; Yang, Xinting; Sun, Chuanheng

    2018-01-01

    Monitoring of fish behavior has drawn extensive attention in pharmacological research, water environmental assessment, bio-inspired robot design and aquaculture. Given that an infrared sensor is low cost and has no illumination limitation or electromagnetic interference, interest in its use in behavior monitoring has grown considerably, especially for 3D trajectory monitoring to quantify fish behavior on the basis of near-infrared absorption by water. However, precise positioning in the vertical dimension (z) remains a challenge, which greatly impacts infrared tracking system accuracy. Hence, an intensity (L) and coordinate (z) correlation model was proposed to overcome this limitation. In the modelling process, two cameras (top view and side view) were employed synchronously to identify the 3D coordinates of each fish (x-y and z, respectively), and the major challenges were the distortion caused by the perspective effect and the refraction at water boundaries. Therefore, a coordinate correction formulation was designed first for the calibration. Then the L-z correlation model was established based on Lambert's absorption law and statistical data analysis, and the model was evaluated by monitoring the 3D trajectories of four fish during the day and night. Finally, individual variation and the depth-detection limits of the model are discussed. Compared with previous studies, the model achieves favorable prediction performance for 3D trajectory monitoring, which could provide some inspiration for fish behavior monitoring, especially for nocturnal behavior studies.
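
    The L-z correlation model is described only qualitatively in this record. Under the Beer-Lambert absorption law, near-infrared intensity decays roughly exponentially with water depth, so a log-linear fit on calibration samples recovers a depth-from-intensity mapping. The sketch below shows that generic idea with synthetic numbers, not the authors' calibrated model.

        import numpy as np

        # Beer-Lambert: I(z) = I0 * exp(-alpha * z)  =>  ln I = ln I0 - alpha * z
        def fit_lambert(depths_m, intensities):
            """Fit I0 and alpha from paired (depth, intensity) calibration samples."""
            slope, intercept = np.polyfit(depths_m, np.log(intensities), 1)
            return np.exp(intercept), -slope         # I0, alpha

        def depth_from_intensity(intensity, i0, alpha):
            return np.log(i0 / intensity) / alpha

        # Synthetic calibration: I0 = 200 counts, alpha = 2.5 per metre, 2% noise.
        rng = np.random.default_rng(2)
        z = np.linspace(0.05, 0.6, 12)
        I = 200 * np.exp(-2.5 * z) * (1 + 0.02 * rng.standard_normal(z.size))
        i0, alpha = fit_lambert(z, I)
        print(i0, alpha, depth_from_intensity(100.0, i0, alpha))   # depth for 100 counts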

  10. Normative monocular visual acuity for early treatment diabetic retinopathy study charts in emmetropic children 5 to 12 years of age.

    Science.gov (United States)

    Dobson, Velma; Clifford-Donaldson, Candice E; Green, Tina K; Miller, Joseph M; Harvey, Erin M

    2009-07-01

    To provide normative data for children tested with Early Treatment Diabetic Retinopathy Study (ETDRS) charts. Cross-sectional study. A total of 252 Native American (Tohono O'odham) children aged 5 to 12 years. On the basis of cycloplegic refraction conducted on the day of testing, all were emmetropic (myopia ≤0.25 diopter [D] spherical equivalent, hyperopia ≤1.00 D spherical equivalent, and astigmatism ≤0.50 D in both eyes). Monocular visual acuity was tested at 4 m, using 1 ETDRS chart for the right eye (RE) and another for the left eye (LE). Visual acuity was scored as the total number of letters correctly identified, by naming or matching to letters on a lap card, and as the smallest letter size for which the child identified 3 of 5 letters correctly. Visual acuity results did not differ for the RE versus the LE, so data are reported for the RE only. Mean visual acuity for 5-year-olds (0.16 logarithm of the minimum angle of resolution [logMAR] [20/29]) was significantly worse than for 8-, 9-, 10-, 11-, and 12-year-olds (0.05 logMAR [20/22] or better at each age). The lower 95% prediction limit for determining whether a child has visual acuity within the normal range was 0.38 (20/48) for 5-year-olds and 0.30 (20/40) for 6- to 12-year-olds, which was reduced to 0.32 (20/42) for 5-year-olds and 0.21 (20/32) for 6- to 12-year-olds when recalculated with outlying data points removed. Mean interocular acuity difference did not vary by age, averaging less than 1 logMAR line at each age, with a lower 95% prediction limit of 0.17 log unit (1.7 logMAR lines) across all ages. For monocular visual acuity based on ETDRS charts to be in the normal range, it must be better than 20/50 for 5-year-olds and better than 20/40 for 6- to 12-year-olds. Normal interocular acuity difference includes values of less than 2 logMAR lines. Normative ETDRS visual acuity values are not as good as norms reported for adults, suggesting that a child's visual acuity results should

  11. Effect of Stereoscopic Anaglyphic 3-Dimensional Video Didactics on Learning Neuroanatomy.

    Science.gov (United States)

    Goodarzi, Amir; Monti, Sara; Lee, Darrin; Girgis, Fady

    2017-11-01

    The teaching of neuroanatomy in medical education has historically been based on didactic instruction, cadaveric dissections, and intraoperative experience for students. Multiple novel 3-dimensional (3D) modalities have recently emerged. Among these, stereoscopic anaglyphic video is easily accessible and affordable; however, its effects have not yet been formally investigated. This study aimed to investigate whether 3D stereoscopic anaglyphic video instruction in neuroanatomy could improve learning for content-naive students, as compared with 2D video instruction. A single-site controlled prospective case-control study was conducted at the School of Education. Content knowledge was assessed at baseline, followed by the presentation of an instructional neuroanatomy video. Participants viewed the video in either 2D or 3D format and then completed a written test of skull base neuroanatomy. Pretest and post-test performances were analyzed with independent Student's t-tests and analysis of covariance. Our study was completed by 249 subjects. At baseline, the 2D (n = 124, F = 97) and 3D groups (n = 125, F = 96) were similar, although the 3D group was older by 1.7 years (P = 0.0355) and the curricula of participating classes differed (P < 0.0001). Average scores for the 3D group were higher for both pretest (2D, M = 19.9%, standard deviation [SD] = 12.5% vs. 3D, M = 23.9%, SD = 14.9%, P = 0.0234) and post-test performances (2D, M = 68.5%, SD = 18.6% vs. 3D, M = 77.3%, SD = 18.8%, P = 0.003), but the magnitude of improvement across groups did not reach statistical significance (2D, M = 48.7%, SD = 21.3%, vs. 3D, M = 53.5%, SD = 22.7%, P = 0.0855). Incorporation of 3D video instruction into curricula without careful integration is insufficient to promote learning over 2D video. Published by Elsevier Inc.

  12. PLOT3D (version-5): a computer code for drawing three dimensional graphs, maps and histograms, in single or multiple colours, for mono or stereoscopic viewing

    International Nuclear Information System (INIS)

    Jayaswal, Balhans

    1987-01-01

    The PLOT3D series of graphic codes (versions 1 to 5) has been developed for drawing three-dimensional graphs, maps, histograms and simple layout diagrams on monochrome or colour raster graphic terminals and plotters. Of these, PLOT3D Version-5 is an advanced code, equipped with several features that make it especially suitable for drawing 3D maps, multicolour 3D and contour graphs, and 3D layout diagrams, in axonometric or perspective projection. Prior to drawing, graphic parameters that define orientation, magnification, smoothing, shading, colour-map, etc. of the figure can be selected interactively by means of simple commands on the user terminal, or by reading those commands from an input data file. This code requires linking with any one of three supporting libraries: PLOT 10 TCS, PLOT 10 IGL, and CALCOMP, and the figure can be plotted in a single colour, or displayed in single or multiple colours depending upon the type of library support and output device. Furthermore, this code can also be used to plot left and right eye view projections of a 3D figure for composing a stereoscopic image from them with the aid of a viewer. 14 figures. (author)
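
    The stereoscopic output described above amounts to projecting the same 3D scene from two slightly rotated eye positions. The sketch below (written in Python rather than the code's own environment) produces left- and right-eye orthographic projections of a point set by rotating about the vertical axis by plus and minus half of an assumed stereo angle.

        import numpy as np

        def stereo_projections(points, stereo_angle_deg=4.0):
            """Return (left, right) 2D orthographic projections of an N x 3 point set."""
            def rot_y(deg):
                a = np.deg2rad(deg)
                return np.array([[np.cos(a), 0, np.sin(a)],
                                 [0, 1, 0],
                                 [-np.sin(a), 0, np.cos(a)]])
            half = stereo_angle_deg / 2.0
            left = (points @ rot_y(+half).T)[:, :2]    # drop z after rotation (orthographic)
            right = (points @ rot_y(-half).T)[:, :2]
            return left, right

        # Toy 3D surface z = sin(x) * cos(y) sampled on a small grid.
        x, y = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
        pts = np.column_stack([x.ravel(), y.ravel(), (np.sin(x) * np.cos(y)).ravel()])
        L, R = stereo_projections(pts)
        print(L[:3]); print(R[:3])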

  13. A comparison of low-cost monocular vision techniques for pothole distance estimation

    CSIR Research Space (South Africa)

    Nienaber, S

    2015-12-01

    Full Text Available measurement setup. Consequently, the camera was placed on a tripod at the exact height it would have been in the vehicle. The images used for this study were captured by a GoPro Hero 3+ camera with the resolution set to 3680 x 2760. The high resolution...

  14. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the cameras and background are static. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
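
    The record names the factors but not the fitted coefficients. As a generic illustration of the multiple-linear-regression step, the sketch below fits per-shot subjective fatigue scores to factor values by ordinary least squares; the factor set loosely follows the abstract, and every number is invented.

        import numpy as np

        def fit_fatigue_model(factors, scores):
            """Ordinary least squares: score ~ intercept + weights . factors."""
            X = np.column_stack([np.ones(len(scores)), factors])
            coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
            return coef                                  # [intercept, w1, w2, w3]

        # Hypothetical per-shot features: change of disparity, disparity, relative motion.
        features = np.array([[0.10, 0.5, 0.2],
                             [0.40, 0.9, 0.8],
                             [0.05, 0.3, 0.1],
                             [0.30, 0.7, 0.6],
                             [0.20, 0.6, 0.4]])
        subjective = np.array([1.2, 4.1, 0.9, 3.2, 2.3])  # invented fatigue ratings
        coef = fit_fatigue_model(features, subjective)
        print(coef)
        print(np.column_stack([np.ones(5), features]) @ coef)   # model predictions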

  15. Stereoscopic filming for investigating evasive side-stepping and anterior cruciate ligament injury risk

    Science.gov (United States)

    Lee, Marcus J. C.; Bourke, Paul; Alderson, Jacqueline A.; Lloyd, David G.; Lay, Brendan

    2010-02-01

    Non-contact anterior cruciate ligament (ACL) injuries are serious and debilitating, often resulting from the performance of evasive side-stepping (Ssg) by team sport athletes. Previous laboratory-based investigations of evasive Ssg have used generic visual stimuli to simulate the realistic time and space constraints that athletes experience in the preparation and execution of the manoeuvre. However, the use of unrealistic visual stimuli to impose these constraints may not accurately identify the relationship between the perceptual demands and ACL loading during Ssg in actual game environments. We propose that stereoscopically filmed footage featuring sport-specific opposing defender(s) simulating a tackle on the viewer, when used as visual stimuli, could improve the ecological validity of laboratory-based investigations of evasive Ssg. Due to the need for precision, and not just the experience of viewing depth, in these scenarios, a rigorous filming process built on key geometric considerations and equipment development to enable a separation of 6.5 cm between two commodity cameras had to be undertaken. Within safety limits, this could be an invaluable tool in enabling more accurate investigations of the associations between evasive Ssg and ACL injury risk.
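
    To see why the 6.5 cm interaxial separation matters, the standard parallel-rig relation gives the on-sensor disparity of an object at distance Z as f * b / Z, which is then scaled up to the display width. The sketch below uses that textbook relation with assumed focal length, sensor width, and screen width; it is not the authors' specific derivation.

        def screen_parallax_mm(distance_m, baseline_m=0.065, focal_mm=35.0,
                               sensor_width_mm=36.0, display_width_mm=5000.0):
            """Screen parallax for a parallel (non-converged) stereo rig."""
            disparity_on_sensor_mm = focal_mm * baseline_m / distance_m
            return disparity_on_sensor_mm * display_width_mm / sensor_width_mm

        # Parallax of a defender at 2 m, 5 m and 10 m on an assumed 5 m wide screen.
        for z in (2.0, 5.0, 10.0):
            print(z, "m ->", round(screen_parallax_mm(z), 1), "mm")

    With parallel cameras, all finite-distance parallax falls on one side of zero until a horizontal image shift is applied in post-production, which is one reason the filming geometry has to be planned so carefully.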

  16. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

    Full Text Available In the context of surgical navigation systems based on augmented reality (AR, the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to refine separately the pose of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.

  17. Formalizing the potential of stereoscopic 3D user experience in interactive entertainment

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2015-03-01

    The use of stereoscopic 3D vision affects how interactive entertainment has to be developed as well as how it is experienced by the audience. The large amount of possibly impacting factors and variety as well as a certain subtlety of measured effects on user experience make it difficult to grasp the overall potential of using S3D vision. In a comprehensive approach, we (a) present a development framework which summarizes possible variables in display technology, content creation and human factors, and (b) list a scheme of S3D user experience effects concerning initial fascination, emotions, performance, and behavior as well as negative feelings of discomfort and complexity. As a major contribution we propose a qualitative formalization which derives dependencies between development factors and user effects. The argumentation is based on several previously published user studies. We further show how to apply this formula to identify possible opportunities and threats in content creation as well as how to pursue future steps for a possible quantification.

  18. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
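
    As a rough sketch of how these two auditory depth cues can be imposed on a source signal, the code below scales gain with the inverse-distance law and lowers a first-order low-pass cutoff as distance grows. The mapping from distance to cutoff frequency is an arbitrary assumption, not the article's design.

        import numpy as np
        from scipy.signal import butter, lfilter

        def apply_depth_cues(signal, fs, distance_m, ref_m=1.0,
                             cutoff_ref_hz=16000.0, cutoff_slope_hz_per_m=1200.0):
            """Attenuate level with 1/r and low-pass more strongly with distance."""
            gain = ref_m / max(distance_m, ref_m)                  # inverse-distance law
            cutoff = max(cutoff_ref_hz - cutoff_slope_hz_per_m * (distance_m - ref_m), 500.0)
            b, a = butter(1, cutoff / (fs / 2), btype="low")
            return gain * lfilter(b, a, signal)

        fs = 44100
        t = np.arange(fs) / fs
        src = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 8000 * t)
        near = apply_depth_cues(src, fs, distance_m=1.0)
        far = apply_depth_cues(src, fs, distance_m=10.0)
        print(np.abs(near).max(), np.abs(far).max())   # distant version is quieter and duller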

  19. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available … environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feedback loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual…

  20. Holistic processing for bodies and body parts: New evidence from stereoscopic depth manipulations.

    Science.gov (United States)

    Harris, Alison; Vyas, Daivik B; Reed, Catherine L

    2016-10-01

    Although holistic processing has been documented extensively for upright faces, it is unclear whether it occurs for other visual categories with more extensive substructure, such as body postures. Like faces, body postures have high social relevance, but they differ in having fine-grain organization not only of basic parts (e.g., arm) but also subparts (e.g., elbow, wrist, hand). To compare holistic processing for whole bodies and body parts, we employed a novel stereoscopic depth manipulation that creates either the percept of a whole body occluded by a set of bars, or of segments of a body floating in front of a background. Despite sharing low-level visual properties, only the stimulus perceived as being behind bars should be holistically "filled in" via amodal completion. In two experiments, we tested for better identification of individual body parts within the context of a body versus in isolation. Consistent with previous findings, recognition of body parts was better in the context of a whole body when the body was amodally completed behind occluders. However, when the same bodies were perceived as floating in strips, performance was significantly worse, and not significantly different from that for amodally completed parts, supporting holistic processing of body postures. Intriguingly, performance was worst for parts in the frontal depth condition, suggesting that these effects may extend from gross body organization to a more local level. These results provide suggestive evidence that holistic representations may not be "all-or-none," but rather also operate on body regions of more limited spatial extent.

  1. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks including the tank wall in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  2. Analysis the macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure macular ganglion cell complex (mGCC) thickness in patients with monocular strabismic amblyopia, in order to explore the relationship between the degree of amblyopia and mGCC thickness, and to determine whether the macular ganglion cell structure is abnormal in strabismic amblyopia. METHODS: Using a Fourier-domain optical coherence tomography (FD-OCT) instrument, the iVue® (Optovue Inc, Fremont, CA), mGCC thickness was measured in the 26 patients (52 eyes) included in this study, and its correlation with best-corrected visual acuity was analysed. RESULTS: Mean mGCC thickness was evaluated in three regions: central, inner circle (3 mm) and outer circle (6 mm). The mean thicknesses in the central, inner and outer regions were 50.74±21.51 μm, 101.4±8.51 μm and 114.2±9.455 μm in the strabismic amblyopic eyes (SAE), and 43.79±11.92 μm, 92.47±25.01 μm and 113.3±12.88 μm in the contralateral sound eyes (CSE), respectively. There was no statistically significant difference between the eyes (P>0.05), but best-corrected visual acuity correlated well with mGCC thickness, and the correlation was stronger for the lower part than for the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness. Although no statistically significant difference in mGCC thickness was found between the SAE and CSE, measuring central macular mGCC thickness in the clinic may help gauge the degree of amblyopia.

  3. Layer- and cell-type-specific subthreshold and suprathreshold effects of long-term monocular deprivation in rat visual cortex.

    Science.gov (United States)

    Medini, Paolo

    2011-11-23

    Connectivity and dendritic properties are determinants of plasticity that are layer and cell-type specific in the neocortex. However, the impact of experience-dependent plasticity at the level of synaptic inputs and spike outputs remains unclear along vertical cortical microcircuits. Here I compared subthreshold and suprathreshold sensitivity to prolonged monocular deprivation (MD) in rat binocular visual cortex in layer 4 and layer 2/3 pyramids (4Ps and 2/3Ps) and in thick-tufted and nontufted layer 5 pyramids (5TPs and 5NPs), which innervate different extracortical targets. In normal rats, 5TPs and 2/3Ps are the most binocular in terms of synaptic inputs, and 5NPs are the least. Spike responses of all 5TPs were highly binocular, whereas those of 2/3Ps were dominated by either the contralateral or ipsilateral eye. MD dramatically shifted the ocular preference of 2/3Ps and 4Ps, mostly by depressing deprived-eye inputs. Plasticity was profoundly different in layer 5. The subthreshold ocular preference shift was sevenfold smaller in 5TPs because of smaller depression of deprived inputs combined with a generalized loss of responsiveness, and was undetectable in 5NPs. Despite their modest ocular dominance change, spike responses of 5TPs consistently lost their typically high binocularity during MD. The comparison of MD effects on 2/3Ps and 5TPs, the main affected output cells of vertical microcircuits, indicated that subthreshold plasticity is not uniquely determined by the initial degree of input binocularity. The data raise the question of whether 5TPs are driven solely by 2/3Ps during MD. The different suprathreshold plasticity of the two cell populations could underlie distinct functional deficits in amblyopia.

  4. Chronic intraventricular administration of lysergic acid diethylamide (LSD) affects the sensitivity of cortical cells to monocular deprivation.

    Science.gov (United States)

    McCall, M A; Tieman, D G; Hirsch, H V

    1982-11-04

    In kittens, but not in adult cats, depriving one eye of pattern vision by suturing the lids shut (monocular deprivation or MD) for one week reduces the proportion of binocular units in the visual cortex. A sensitivity of cortical units in adult cats to MD can be produced by infusing exogenous monoamines into the visual cortex. Since LSD interacts with monoamines, we have examined the effects of chronic administration of LSD on the sensitivity to MD for cortical cells in adult cats. Cats were assigned randomly to one of four conditions: MD/LSD, MD/No-LSD, No-MD/LSD, No-MD/No-LSD. An osmotic minipump delivered either LSD or the vehicle solution alone during a one-week period of MD. The animals showed no obvious anomalies during the administration of the drug. After one week the response properties of single units in area 17 of the visual cortex were studied without knowledge of the contents of the individual minipumps. With the exception of ocular dominance, the response properties of units recorded in all animals did not differ from normal. In the control animals (MD/No-LSD, No-MD/LSD, No-MD/No-LSD) the average proportion of binocular cells was 78%; similar to that observed for normal adult cats. However, in the experimental animals, which received LSD during the period of MD, only 52% of the cells were binocular. Our results suggest that chronic intraventricular administration of LSD affects either directly or indirectly the sensitivity of cortical neurons to MD.

  5. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Contrast thresholds showed a steeper decline and a higher correlation with age at the parafovea than at the fovea. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.

  6. Study of high-definition and stereoscopic head-aimed vision for improved teleoperation of an unmanned ground vehicle

    Science.gov (United States)

    Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian

    2012-06-01

    Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.

  7. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased by continued VWT training. Thus, optomotry and VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  8. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information.This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).
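
    The relative cue weighting tracked in this study can be pictured with a simple linear cue-combination model; the weights and slant values below are illustrative only and are not the authors' data or fitted model.

      import numpy as np

      def combined_slant(slant_texture_deg, slant_stereo_deg, w_stereo):
          """Linear cue combination: perceived slant as a weighted average of the slant
          signalled by monocular texture and by stereoscopic disparity."""
          return w_stereo * slant_stereo_deg + (1.0 - w_stereo) * slant_texture_deg

      # In cue-conflict trials the two cues signal different slants; regressing responses
      # against the two cue values across trials recovers the relative stereo weight.
      print(combined_slant(slant_texture_deg=30.0, slant_stereo_deg=45.0, w_stereo=0.7))  # 40.5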

  9. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  10. Line-based monocular graph SLAM algorithm%基于图优化的单目线特征SLAM算法

    Institute of Scientific and Technical Information of China (English)

    董蕊芳; 柳长安; 杨国田; 程瑞营

    2017-01-01

    A new line-based 6-DOF monocular graph simultaneous localization and mapping (SLAM) algorithm was proposed. First, straight lines were used as features instead of points, because a map consisting of a sparse set of 3D points cannot describe the structure of the surrounding world. Second, most previous line-based SLAM algorithms are filtering-based solutions, which become inconsistent when applied to the inherently non-linear SLAM problem; in contrast, a graph-based solution was used to improve localization accuracy and the consistency and accuracy of mapping. Third, a special line representation was exploited that combines Plücker coordinates with the Cayley representation: the Plücker coordinates are used for the 3D line projection function, while the Cayley representation is used to update the line parameters during the non-linear optimization process. Finally, simulation experiments show that the proposed algorithm outperforms odometry and EKF-based SLAM in terms of pose estimation, with the sum of squared errors (SSE) and root-mean-square error (RMSE) of the proposed method being 2.5% and 10.5% of those of odometry, and 22.4% and 33% of those of EKF-based SLAM. The reprojection error is only 45.5 pixels. The real-image experiment shows that the proposed algorithm obtains an SSE and RMSE of pose estimation of only 958 cm² and 3.9413 cm. Therefore, it can be concluded that the proposed algorithm is effective and accurate.
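
    The abstract states that Plücker coordinates are used for the 3D line projection function; the sketch below shows one standard formulation of that projection (transform the line to the camera frame, then map its moment vector through a line projection matrix). The intrinsics and the example line are placeholders, and this is not necessarily the exact parameterization used in the paper.

      import numpy as np

      def plucker_from_points(A, B):
          """Plücker coordinates (direction d, moment m) of the line through 3D points A and B."""
          d = B - A
          m = np.cross(A, d)
          return d, m

      def project_plucker_line(d_w, m_w, R, t, fx, fy, cx, cy):
          """Project a world-frame Plücker line into homogeneous image-line coordinates l,
          such that l . (u, v, 1) = 0 for image points (u, v) lying on the projected line."""
          # Rigid transform of the line into the camera frame (X_c = R X_w + t).
          d_c = R @ d_w
          m_c = R @ m_w + np.cross(t, R @ d_w)
          # Line projection matrix built from the pinhole intrinsics.
          K_line = np.array([[fy, 0.0, 0.0],
                             [0.0, fx, 0.0],
                             [-fy * cx, -fx * cy, fx * fy]])
          return K_line @ m_c

      # Example: the horizontal line y = 0, z = 5 projects to the image row v = cy.
      d, m = plucker_from_points(np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]))
      l = project_plucker_line(d, m, np.eye(3), np.zeros(3), fx=500, fy=500, cx=320, cy=240)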

  11. Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.

    Science.gov (United States)

    Zeri, Fabrizio; Livi, Stefano

    2015-05-01

    This study investigates discomfort symptoms while watching Stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies, and 255 subjects viewed a 2D version of a film seen in S3D by 251 subjects from the S3D group for a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying symptoms: External Symptoms Factors (ESF) with a mean ± S.D. symptom score of 1.51 ± 0.58 comprised of eye burning, eye ache, eye strain, eye irritation and tearing; and Internal Symptoms Factors (ISF) with a mean ± S.D. symptom score of 1.38 ± 0.51 comprised of blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001) but with external symptoms significantly higher than internal ones (Wilcoxon Signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies provided lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated to the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D compared to those of 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001). In short, the analysis of the symptoms
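
    The two statistics reported for the factor scores (a Spearman correlation between the external and internal symptom factors, and a Wilcoxon signed-rank test of their difference) can be reproduced on any paired score vectors with SciPy; the arrays below are placeholders rather than the survey data.

      import numpy as np
      from scipy import stats

      # Placeholder per-respondent factor scores.
      esf = np.array([1.2, 1.8, 2.1, 1.0, 1.6, 2.4, 1.3])
      isf = np.array([1.1, 1.5, 1.9, 1.0, 1.2, 2.0, 1.4])

      rho, p_rho = stats.spearmanr(esf, isf)   # association between the two factors
      stat, p_w = stats.wilcoxon(esf, isf)     # paired test: are ESF scores higher than ISF?
      print(f"Spearman r = {rho:.2f} (p = {p_rho:.3f}); Wilcoxon p = {p_w:.3f}")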

  12. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
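
    The central operation in depth-image-based rendering is a per-pixel horizontal shift proportional to disparity; the simplified sketch below synthesizes one additional viewpoint using that idea and deliberately ignores the hole-filling and occlusion handling that a real DIBR pipeline requires. Focal length and baseline are placeholders.

      import numpy as np

      def dibr_warp(image, depth, focal_px, baseline_m):
          """Shift each pixel horizontally by its disparity d = f * B / Z to synthesize a
          virtual view; disoccluded pixels are simply left as zeros (holes)."""
          h, w = depth.shape
          virtual = np.zeros_like(image)
          disparity = np.round(focal_px * baseline_m / depth).astype(int)
          for y in range(h):
              for x in range(w):
                  xv = x - disparity[y, x]
                  if 0 <= xv < w:
                      virtual[y, xv] = image[y, x]
          return virtual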

  13. Three-dimensional particle image velocimetry in a generic can-type gas turbine combustor

    CSIR Research Space (South Africa)

    Meyers, BC

    2009-09-01

    Full Text Available The three-dimensional flow field inside a generic can-type, forward flow, experimental combustor was measured. A stereoscopic Particle Image Velocimetry (PIV) system was used to obtain the flow field of the combustor in the non-reacting condition...

  14. Percepção monocular da profundidade ou relevo na ilusão da máscara côncava na esquizofrenia

    Directory of Open Access Journals (Sweden)

    Arthur Alves

    2014-03-01

    Full Text Available This study was carried out to investigate monocular perception of the depth or relief of the hollow (concave) mask by 29 healthy individuals, seven individuals with schizophrenia who had been using antipsychotic medication for four weeks or less, and 29 who had been using antipsychotic medication for more than four weeks. The three groups classified the reverse side of a polychrome mask under two lighting conditions, from above and from below. The results indicated that most individuals with schizophrenia inverted the depth of the concave mask under monocular viewing and perceived it as convex, and were therefore susceptible to the hollow-mask illusion. Individuals with schizophrenia who had been using antipsychotic medication for more than four weeks estimated the convexity of the concave mask illuminated from above as being of smaller extent than did healthy individuals.

  15. Estimating 3D tilt from local image cues in natural scenes

    OpenAIRE

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then ana...

  16. Improving maps of ice-sheet surface elevation change using combined laser altimeter and stereoscopic elevation model data

    DEFF Research Database (Denmark)

    Fredenslund Levinsen, Joanna; Howat, I. M.; Tscherning, C. C.

    2013-01-01

    We combine the complementary characteristics of laser altimeter data and stereoscopic digital elevation models (DEMs) to construct high-resolution (~100 m) maps of surface elevations and elevation changes over rapidly changing outlet glaciers in Greenland. Measurements from spaceborne and airborne laser altimeters have relatively low errors but are spatially limited to the ground tracks, while DEMs have larger errors but provide spatially continuous surfaces. The principle of our method is to fit the DEM surface to the altimeter point clouds in time and space to minimize the DEM errors and use that surface to extrapolate elevations away from altimeter flight lines. This reduces the DEM registration errors and fills the gap between the altimeter paths. We use data from ICESat and ATM as well as SPOT 5 DEMs from 2007 and 2008 and apply them to the outlet glaciers Jakobshavn Isbræ (JI...
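
    A toy version of the fitting idea (adjust the DEM so that it matches the altimeter point cloud, then use the adjusted surface between tracks) is sketched below with a purely planar bias model; the actual method fits the surface in both time and space and is considerably more elaborate.

      import numpy as np

      def fit_planar_bias(x, y, dem_at_points, altimeter_z):
          """Least-squares fit of a planar correction a*x + b*y + c to the
          DEM-minus-altimeter elevation residuals at the altimeter footprints."""
          residual = dem_at_points - altimeter_z
          A = np.column_stack([x, y, np.ones_like(x)])
          coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
          return coeffs  # (a, b, c)

      def correct_dem(dem, xs, ys, coeffs):
          """Subtract the fitted bias everywhere, including between altimeter tracks."""
          a, b, c = coeffs
          X, Y = np.meshgrid(xs, ys)
          return dem - (a * X + b * Y + c)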

  17. 3-D flow characterization and shear stress in a stenosed carotid artery bifurcation model using stereoscopic PIV technique.

    Science.gov (United States)

    Kefayati, Sarah; Poepping, Tamie L

    2010-01-01

    The carotid artery bifurcation is a common site of atherosclerosis which is a major leading cause of ischemic stroke. The impact of stenosis in the atherosclerotic carotid artery is to disturb the flow pattern and produce regions with high shear rate, turbulence, and recirculation, which are key hemodynamic factors associated with plaque rupture, clot formation, and embolism. In order to characterize the disturbed flow in the stenosed carotid artery, stereoscopic PIV measurements were performed in a transparent model with 50% stenosis under pulsatile flow conditions. Simulated ECG gating of the flowrate waveform provides external triggering required for volumetric reconstruction of the complex flow patterns. Based on the three-component velocity data in the lumen region, volumetric shear-stress patterns were derived.
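
    For orientation, one in-plane component of the shear field can be derived from a gridded velocity field like the one measured here using finite differences; the sketch below assumes arrays indexed [row = y, col = x] with uniform grid spacing, which is an assumption rather than the study's actual processing chain.

      import numpy as np

      def inplane_shear_rate(u, v, dx, dy):
          """In-plane shear rate du/dy + dv/dx (1/s) from gridded velocity components
          u(x, y) and v(x, y), given grid spacings dx and dy in metres."""
          du_dy, du_dx = np.gradient(u, dy, dx)
          dv_dy, dv_dx = np.gradient(v, dy, dx)
          return du_dy + dv_dx

      # For a Newtonian blood analogue, wall shear stress follows as tau = mu * shear_rate.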

  18. Four-dimensional image display for associated particle imaging

    International Nuclear Information System (INIS)

    Headley, G.; Beyerle, A.; Durkee, R.; Hurley, P.; Tunnell, L.

    1994-01-01

    Associated particle imaging (API) is a three-dimensional neutron gamma imaging technique which provides both spatial and spectral information about an unknown. A local area network consisting of a UNIX fileserver and multiple DOS workstations has been chosen to perform the data acquisition and display functions. The data are acquired with a CAMAC system, stored in list mode, and sorted on the fileserver for display on the DOS workstations. Three of the display PCs, interacting with the fileserver, provide coordinated views as the operator 'slices' the image. The operator has a choice of a one-dimensional shadowgram from any side, a two-dimensional shadowgram from any side, or a three-dimensional view (either perspective projection or stereoscopic). A common color scheme is used to carry energy information into the spatial images. ((orig.))

  19. Imaging

    International Nuclear Information System (INIS)

    Kellum, C.D.; Fisher, L.M.; Tegtmeyer, C.J.

    1987-01-01

    This paper examines the advantages of the use of excretory urography for diagnosis. According to the authors, excretory urography remains the basic radiologic examination of the urinary tract and is the foundation for the evaluation of suspected urologic disease. Despite the development of newer diagnostic modalities such as isotope scanning, ultrasonography, CT, and magnetic resonance imaging (MRI), excretory urography has maintained a prominent role in uroradiology. Some indications have been altered and will continue to change with the newer imaging modalities, but the initial evaluation of suspected urinary tract structural abnormalities, hematuria, pyuria, and calculus disease is best performed with excretory urography. The examination is relatively inexpensive and simple to perform, with few contraindications. Excretory urography, when properly performed, can provide valuable information about the renal parenchyma, pelvicalyceal system, ureters, and urinary bladder

  20. Image Based Biomarker of Breast Cancer Risk: Analysis of Risk Disparity Among Minority Populations

    Science.gov (United States)

    2014-03-01

    …cluster locations. In the undirected strategy, the PDF is uniform within the entire volume of the breast, while in… stereoscopic breast biopsy images (13, 14). Each cluster in the database is stored as a 3D binary volume, with a voxel value of '1' representing… Award Number: W81XWH-09-1-0062.

  1. Attack of the S. Mutans!: a stereoscopic-3D multiplayer direct-manipulation behavior-modification serious game for improving oral health in pre-teens

    Science.gov (United States)

    Hollander, Ari; Rose, Howard; Kollin, Joel; Moss, William

    2011-03-01

    Attack! of the S. Mutans is a multi-player game designed to harness the immersion and appeal possible with wide-field-of-view stereoscopic 3D to combat the tooth decay epidemic. Tooth decay is one of the leading causes of school absences and costs more than $100B annually in the U.S. In 2008 the authors received a grant from the National Institutes of Health to build a science museum exhibit that included a suite of serious games involving the behaviors and bacteria that cause cavities. The centerpiece is an adventure game where five simultaneous players use modified Wii controllers to battle biofilms and bacteria while immersed in environments generated within an 11-foot stereoscopic WUXGA display. The authors describe the system and interface used in this prototype application and some of the ways they attempted to use the power of immersion and the appeal of the S3D revolution to change health attitudes and self-care habits.

  2. Range and variability in gesture-based interactions with medical images : do non-stereo versus stereo visualizations elicit different types of gestures?

    NARCIS (Netherlands)

    Beurden, van M.H.P.H.; IJsselsteijn, W.A.

    2010-01-01

    The current paper presents a study into the range and variability of natural gestures when interacting with medical images, using traditional non stereo and stereoscopic modes of presentation. The results have implications for the design of computer-vision algorithms developed to support natural

  3. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  4. A comparison of cup-to-disc ratio estimates by fundus biomicroscopy and stereoscopic optic disc photography in the Tema Eye Survey.

    Science.gov (United States)

    Mwanza, J C; Grover, D S; Budenz, D L; Herndon, L W; Nolan, W; Whiteside-de Vos, J; Hay-Smith, G; Bandi, J R; Bhansali, K A; Forbes, L A; Feuer, W J; Barton, K

    2017-08-01

    Purpose: To determine if there are systematic differences in cup-to-disc ratio (CDR) grading using fundus biomicroscopy compared to stereoscopic disc photograph reading. Methods: The vertical cup-to-disc ratio (VCDR) and horizontal cup-to-disc ratio (HCDR) of 2200 eyes (testing set) were graded by glaucoma subspecialists through fundus biomicroscopy and by a reading center using stereoscopic disc photos. For validation, the glaucoma experts also estimated VCDR and HCDR using stereoscopic disc photos in a subset of 505 eyes that they had assessed biomicroscopically. Agreement between grading methods was assessed with Bland-Altman plots. Results: In both sets, photo reading tended to grade small CDRs marginally larger, but large CDRs marginally smaller, than fundus biomicroscopy. The mean differences in VCDR and HCDR were 0.006±0.18 and 0.05±0.18 (testing set), and -0.053±0.23 and -0.028±0.21 (validation set), respectively. The limits of agreement were ~0.4, which is twice as large as the cutoff for a clinically significant CDR difference between methods. CDR estimates differed by 0.2 or more between methods in 33.8-48.7% of eyes. Conclusions: The differences in CDR estimates between fundus biomicroscopy and stereoscopic optic disc photo reading showed wide variation and reached the threshold of clinical significance in a large proportion of patients, suggesting poor agreement. Thus, glaucoma should be monitored by comparing baseline and subsequent CDR estimates obtained with the same method rather than by comparing photographs to fundus biomicroscopy.
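
    The Bland-Altman quantities used for the agreement analysis (mean difference and 95% limits of agreement) reduce to a few lines of code; the CDR values below are invented solely to show the computation.

      import numpy as np

      def bland_altman(method_a, method_b):
          """Mean difference (bias) and 95% limits of agreement between two grading methods."""
          diff = np.asarray(method_a) - np.asarray(method_b)
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      vcdr_biomicroscopy = [0.3, 0.5, 0.7, 0.4, 0.8]   # illustrative values only
      vcdr_photo_reading = [0.35, 0.5, 0.6, 0.5, 0.7]
      bias, (lo, hi) = bland_altman(vcdr_biomicroscopy, vcdr_photo_reading)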

  5. Interpolated sagittal and coronal reconstruction of CT images in the screening of neck abnormalities

    International Nuclear Information System (INIS)

    Koga, Issei

    1983-01-01

    Reconstructed sagittal and coronal images were analyzed for their usefulness in clinical applications and to determine the correct use of reconstruction techniques. Reconstructed stereoscopic images can be formed by continuous or interrupted image reconstruction using interpolation. This study showed that images of lesions less than 10 mm in diameter should be acquired continuously and reconstructed with the uninterrupted technique. However, 5 mm interruption distances are acceptable for interpolated reconstruction except in cases of lesions less than 10 mm in diameter. Clinically, interpolated reconstruction is not adequate for semicircular lesions less than 10 mm. Blood vessels and linear lesions are good candidates for the application of interpolated reconstruction. Reconstruction of images using interrupted interpolation is therefore recommended for screening and for demonstrating correct stereoscopic information, except in cases of small lesions less than 10 mm in diameter. The results of this study underscore the fact that obscure information in transverse CT images should be routinely clarified by interpolated reconstruction techniques if transverse images are not acquired continuously. Interpolated reconstruction may be helpful in obtaining stereoscopic information. (author)
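
    The basic reslicing step implied here (interpolating an axial stack along the slice axis and extracting sagittal and coronal planes) can be sketched as follows; the slice spacing, pixel spacing and linear interpolation are assumptions for illustration, not the paper's protocol.

      import numpy as np
      from scipy import ndimage

      def reslice(axial_volume, slice_spacing_mm, pixel_spacing_mm):
          """Interpolate an axial stack (z, y, x) to roughly isotropic voxels, then extract
          sagittal (z-y) and coronal (z-x) planes through the volume centre."""
          zoom_z = slice_spacing_mm / pixel_spacing_mm
          iso = ndimage.zoom(axial_volume, (zoom_z, 1.0, 1.0), order=1)  # linear interpolation
          z, y, x = iso.shape
          sagittal = iso[:, :, x // 2]
          coronal = iso[:, y // 2, :]
          return sagittal, coronal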

  6. Clinically Normal Stereopsis Does Not Ensure Performance Benefit from Stereoscopic 3D Depth Cues

    Science.gov (United States)

    2014-10-28

    …participants (NVIDIA Personal GeForce 3D Vision Active Shutter Glasses and Samsung SyncMaster 2233RZ). The display was a 22-inch diagonal 120 Hz LCD with a resolution of 1680 x 1050. [Figure caption: Image adapted from Samsung SyncMaster and NVIDIA GeForce…]

  7. Stereoscopic displays for virtual reality in the car manufacturing industry: application to design review and ergonomic studies

    Science.gov (United States)

    Moreau, Guillaume; Fuchs, Philippe

    2002-05-01

    In the car manufacturing industry the trend is to drastically reduce the time-to-market by increasing the use of the Digital Mock-up instead of physical prototypes. Design review and ergonomic studies are specific tasks because they involve qualitative or even subjective judgements. In this paper, we present IMAVE (IMmersion Adapted to a VEhicle), designed for immersive styling review, gaps visualization and simple ergonomic studies. We show that stereoscopic displays are necessary and must fulfill several constraints due to the proximity and size of the car dashboard. The duration of the work sessions forces us to eliminate all vertical parallax, and 1:1 scale is obviously required for a valid immersion. Two demonstrators were realized, allowing us to have a large set of testers (over 100). More than 80% of the testers saw an immediate use for the IMAVE system. We discuss the good and bad marks awarded to the system. Future work includes being able to use several rear-projected stereo screens for doors and central console visualization, but without the parallax presently visible in some CAVE-like environments.

  8. Figure and Ground in the Visual Cortex: V2 Combines Stereoscopic Cues with Gestalt Rules

    OpenAIRE

    Qiu, Fangtu T.; von der Heydt, Rüdiger

    2005-01-01

    Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring three-dimensional (3D) layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border-ownership coding). Here we show that area V2 combines two strategies of computation, one that exploits binocular ster...

  9. Stereoscopic CAD and Environmental Sculpture: Enhancement of the Design Process in the Visual Arts

    Science.gov (United States)

    Fisher, Robert N.; Bandini, Pier L.

    1989-09-01

    In this paper, co-authors Robert Fisher and Pier Luigi Bandini describe their personal observations concerning stereo enhancements of computer graphics images employed in their research. In Part One, Robert Fisher, a professional sculptor, Professor and Artist-in-Residence in the College of Engineering at Penn State, cites three recent environmental sculpture projects: "See-scape," "A Page from the Book of Skies," and an as yet untitled work. Wireframe images, interior views of architectural spaces, and complex imagery are rendered comprehensible by stereo 3-D. In Part Two, Pier L. Bandini, Associate Professor of Architecture and Director of the Architecture CAD Lab at Penn State, describes the virtues of the stereo-enhanced wireframe model--the benefits of the "see-through" coupled with a complete awareness of the whole space. The final example, of a never-realized XVIII-century project, suggests a new and profound application of stereo 3-D to historical inquiry, namely, the experience of ancient spaces and structures that no longer exist or that were never constructed.

  10. CROSSPLOT-3/CON-3D, 3-D and Stereoscopic Computer-Aided Design Graphics

    International Nuclear Information System (INIS)

    Grotch, S.L.

    1986-01-01

    Description of program or function: CROSSPLOT3 is a general three-dimensional point plotting program which generates scatterplots of a data matrix from any user-specified viewpoint. Images can be rotated for a movie-like effect enhancing stereo perception. A number of features can be invoked by the user including: color, class distinction, flickering, sectioning, projections to grid surfaces, and drawing a plane. Plots may be viewed in real time as they are generated. CON3D generates three-dimensional surfaces plus contours on a lower plane from either data on a rectangular grid or an analytical function z=f(x,y). The user may choose any viewing perspective. Plots may be generated in color with many refinements under user control

  11. A comparative study of manhole hydraulics using stereoscopic PIV and different RANS models.

    Science.gov (United States)

    Beg, Md Nazmul Azim; Carvalho, Rita F; Tait, Simon; Brevis, Wernher; Rubinato, Matteo; Schellart, Alma; Leandro, Jorge

    2017-04-01

    Flows in manholes are complex and may include swirling and recirculation flow with significant turbulence and vorticity. However, how these complex 3D flow patterns could generate different energy losses and so affect flow quantity in the wider sewer network is unknown. In this work, 2D3C stereo Particle Image Velocimetry measurements are made in a surcharged scaled circular manhole. A computational fluid dynamics (CFD) model in OpenFOAM® with four different Reynolds Averaged Navier Stokes (RANS) turbulence models is constructed using a volume of fluid model, to represent flows in this manhole. Velocity profiles and pressure distributions from the models are compared with the experimental data in view of finding the best modelling approach. It was found among the four RANS models that the re-normalization group (RNG) k-ɛ and k-ω shear stress transport (SST) models gave a better approximation for velocity and pressure.

  12. Resultados preliminares de um sistema computadorizado e estereoscópico para pupilometria in vivo Preliminary results of a computerized and stereoscopic system for in vivo pupillometry

    Directory of Open Access Journals (Sweden)

    Luis Alberto Vieira de Carvalho

    2008-12-01

    …ophthalmoscope helmet and a typical diving mask as support for a high-resolution, high-sensitivity CCD. Using an IBM-compatible computer, sequences of video in AVI format were digitized for several seconds at a mean rate of 30 Hz. Algorithms based on image-processing principles were implemented for detection of the pupil edges. RESULTS: We present preliminary results of this system for a volunteer patient. Data for the horizontal (x) and vertical (y) central position and for the diameter of the pupil were then exported to files that can be read by typical spreadsheet programs (Excel). CONCLUSIONS: In this manner, precise data can be obtained stereoscopically (for both pupils at the same time) for any patient, given that the accommodation process is controlled by using a white-LED virtual target located 6 meters from the patient's eye. An electronic board precisely controls the level of illumination. We believe the instrument developed here may be useful in certain ophthalmic practices where precise pupil geometric data are needed.
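
    The paper does not spell out its edge-detection algorithm beyond "image processing principles"; the sketch below shows one simple way a per-frame pupil centre and diameter can be extracted with OpenCV, with the threshold value being an arbitrary assumption.

      import cv2

      def pupil_center_and_diameter(gray_frame, thresh=40):
          """Segment the dark pupil by thresholding, take the largest contour,
          and fit a minimum enclosing circle to estimate centre and diameter (pixels)."""
          _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY_INV)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          if not contours:
              return None
          pupil = max(contours, key=cv2.contourArea)
          (x, y), radius = cv2.minEnclosingCircle(pupil)
          return (x, y), 2.0 * radius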

  13. A survey of visually induced symptoms and associated factors in spectators of three dimensional stereoscopic movies

    Directory of Open Access Journals (Sweden)

    Solimini Angelo G

    2012-09-01

    Full Text Available Abstract Background The increasing popularity of commercial movies showing three-dimensional (3D) computer-generated images has raised concern about image safety and possible side effects on population health. This study aims to (1) quantify the occurrence of visually induced symptoms suffered by spectators during and after viewing a commercial 3D movie and (2) assess individual and environmental factors associated with those symptoms. Methods A cross-sectional survey was carried out using a paper-based, self-administered questionnaire. The questionnaire includes individual and movie characteristics and selected visually induced symptoms (tired eyes, double vision, headache, dizziness, nausea and palpitations). Symptoms were queried at 3 different times: during, right after and 2 hours after the movie. Results We collected 953 questionnaires. In our sample, 539 (60.4%) individuals reported 1 or more symptoms during the movie, 392 (43.2%) right after and 139 (15.3%) at 2 hours from the movie. The most frequently reported symptoms were tired eyes (during the movie by 34.8%, right after by 24.0%, after 2 hours by 5.7% of individuals) and headache (during the movie by 13.7%, right after by 16.8%, after 2 hours by 8.3% of individuals). Individual history of frequent headache was associated with tired eyes (OR = 1.34, 95%CI = 1.01-1.79), double vision (OR = 1.96; 95%CI = 1.13-3.41) and headache (OR = 2.09; 95%CI = 1.41-3.10) during the movie and with headache after the movie (OR = 1.64; 95%CI = 1.16-2.32). Individual susceptibility to car sickness, dizziness, anxiety level, movie show time and animation 3D movies were also associated with several other symptoms. Conclusions The high occurrence of visually induced symptoms resulting from this survey suggests the need to raise public awareness of the possible discomfort that susceptible individuals may suffer during and after the viewing of 3D movies.
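
    The odds ratios and 95% confidence intervals quoted above follow from the standard 2x2-table formulas; the counts in the example below are invented purely to show the calculation and are not the survey data.

      import math

      def odds_ratio_ci(a, b, c, d, z=1.96):
          """OR and 95% CI for a 2x2 table: a = exposed with symptom, b = exposed without,
          c = unexposed with symptom, d = unexposed without."""
          odds_ratio = (a * d) / (b * c)
          se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
          lower = math.exp(math.log(odds_ratio) - z * se_log)
          upper = math.exp(math.log(odds_ratio) + z * se_log)
          return odds_ratio, (lower, upper)

      print(odds_ratio_ci(120, 180, 90, 200))  # illustrative counts only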

  14. Augmented reality system for oral surgery using 3D auto stereoscopic visualization.

    Science.gov (United States)

    Tran, Huy Hoang; Suenaga, Hideyuki; Kuwana, Kenta; Masamune, Ken; Dohi, Takeyoshi; Nakajima, Susumu; Liao, Hongen

    2011-01-01

    We present an augmented reality system for oral and maxillofacial surgery in this paper. Instead of being displayed on a separate screen, three-dimensional (3D) virtual presentations of osseous structures and soft tissues are projected onto the patient's body, providing surgeons with exact knowledge of the depth of high-risk tissues inside the bone. In this study we employ a 3D integral imaging technique which produces motion parallax in both the horizontal and vertical directions over a wide viewing area. In addition, surgeons are able to check the progress of the operation in real time through an intuitive, content-rich, hardware-accelerated 3D interface. These features prevent surgeons from penetrating into high-risk areas and thus help improve the quality of the operation. Operational tasks such as hole drilling and screw fixation were performed using our system and showed an overall positional error of less than 1 mm. The feasibility of our system was also verified with a human volunteer experiment.

  15. A Topological Array Trigger for AGIS, the Advanced Gamma ray Imaging System

    Science.gov (United States)

    Krennrich, F.; Anderson, J.; Buckley, J.; Byrum, K.; Dawson, J.; Drake, G.; Haberichter, W.; Imran, A.; Krawczynski, H.; Kreps, A.; Schroedter, M.; Smith, A.

    2008-12-01

    Next generation ground based γ-ray observatories such as AGIS and CTA are expected to cover a 1 km2 area with 50-100 imaging atmospheric Cherenkov telescopes. The stereoscopic view of air showers from multiple viewpoints raises the possibility of using a topological array trigger that adds substantial flexibility, new background suppression capabilities and a reduced energy threshold. In this paper we report on the concept and technical implementation of a fast topological trigger system that makes use of real-time image processing of individual camera patterns and their combination in a stereoscopic array analysis. A prototype system is currently under construction and we discuss the design and hardware of this topological array trigger system.
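
    A toy sketch of the kind of array-level decision such a topological trigger makes (requiring camera-level triggers from adjacent telescopes to coincide within a short time window) is given below; the coincidence window, adjacency map and multiplicity are assumptions, not AGIS design values.

      from itertools import combinations

      def array_trigger(triggers, neighbours, window_ns=40.0, multiplicity=2):
          """triggers: dict telescope_id -> camera trigger time (ns) for telescopes that fired;
          neighbours: set of frozenset({i, j}) pairs of adjacent telescopes.
          Accept the event if at least `multiplicity` adjacent telescopes fire in coincidence."""
          for group in combinations(triggers, multiplicity):
              times = [triggers[t] for t in group]
              adjacent = all(frozenset(p) in neighbours for p in combinations(group, 2))
              if adjacent and max(times) - min(times) <= window_ns:
                  return True
          return False

      # Example: telescopes 3 and 4 are neighbours and fire 12 ns apart -> event accepted.
      print(array_trigger({3: 100.0, 4: 112.0, 9: 460.0}, {frozenset({3, 4})}))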

  16. Early monocular defocus disrupts the normal development of receptive-field structure in V2 neurons of macaque monkeys.

    Science.gov (United States)

    Tao, Xiaofeng; Zhang, Bin; Shen, Guofu; Wensveen, Janice; Smith, Earl L; Nishimoto, Shinji; Ohzawa, Izumi; Chino, Yuzo M

    2014-10-08

    Experiencing different quality images in the two eyes soon after birth can cause amblyopia, a developmental vision disorder. Amblyopic humans show a reduced capacity for judging the relative position of a visual target in reference to nearby stimulus elements (position uncertainty) and often experience visual image distortion. Although abnormal pooling of local stimulus information by neurons beyond striate cortex (V1) is often suggested as a neural basis of these deficits, extrastriate neurons in the amblyopic brain have rarely been studied using microelectrode recording methods. The receptive field (RF) of neurons in visual area V2 in normal monkeys is made up of multiple subfields that are thought to reflect V1 inputs and are capable of encoding the spatial relationship between local stimulus features. We created primate models of anisometropic amblyopia and analyzed the RF subfield maps for multiple nearby V2 neurons of anesthetized monkeys by using dynamic two-dimensional noise stimuli and reverse correlation methods. Unlike in normal monkeys, the subfield maps of V2 neurons in amblyopic monkeys were severely disorganized: subfield maps showed higher heterogeneity within each neuron as well as across nearby neurons. Amblyopic V2 neurons exhibited robust binocular suppression, and the strength of the suppression was positively correlated with the degree of heterogeneity and the severity of amblyopia in individual monkeys. Our results suggest that the disorganized subfield maps and robust binocular suppression of amblyopic V2 neurons are likely to adversely affect the higher stages of cortical processing, resulting in position uncertainty and image distortion. Copyright © 2014 the authors.

  17. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle.

    Science.gov (United States)

    Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru

    2018-01-10

    We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy.
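
    A hedged PyTorch-style sketch of a fusion network with the two outputs described (a position regression head and a target-presence classification head) is shown below; the layer sizes, channel counts and input resolution are placeholders and do not reproduce the authors' SPAD DCNN architecture.

      import torch
      import torch.nn as nn

      class FusionLocalizerNet(nn.Module):
          """Stacks range, monocular intensity and peak-intensity images as 3 input channels
          (they share one coordinate system, so no external calibration is needed)."""
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
                  nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
              )
              self.position_head = nn.Linear(32 * 4 * 4, 2)   # regressed (x, y) of the sensor
              self.target_head = nn.Linear(32 * 4 * 4, 2)     # target present / absent

          def forward(self, x):
              h = self.features(x)
              return self.position_head(h), self.target_head(h)

      net = FusionLocalizerNet()
      pos, cls = net(torch.randn(1, 3, 64, 64))  # placeholder input resolution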

  18. Diffuse nitrogen loss simulation and impact assessment of stereoscopic agriculture pattern by integrated water system model and consideration of multiple existence forms

    Science.gov (United States)

    Zhang, Yongyong; Gao, Yang; Yu, Qiang

    2017-09-01

    Agricultural nitrogen loss is becoming an increasingly important source of water quality deterioration and eutrophication, and it even threatens water safety for humanity. Nitrogen dynamics are still too complicated to be captured well at the watershed scale, owing to nitrogen's multiple existence forms and instability and to the disturbance of agricultural management practices. Stereoscopic agriculture is a novel agricultural planting pattern for the efficient use of local natural resources (e.g., water, land, sunshine, heat and fertilizer). It is widely promoted as a high-yield system that can deliver considerable economic benefits, particularly in China. However, its implications for environmental quality are not clear. In our study, the Qianyanzhou station, which is famous for the stereoscopic agriculture pattern of Southern China, and an experimental watershed there were selected as our study area. Regional characteristics of runoff and nitrogen losses were simulated by an integrated water system model (HEQM) with multi-objective calibration, and multiple agricultural practices were assessed to find effective approaches for the reduction of diffuse nitrogen losses. Results showed that daily variations of runoff and nitrogen forms were well reproduced throughout the watershed, i.e., satisfactory performances for ammonium and nitrate nitrogen (NH4-N and NO3-N) loads, good performances for runoff and organic nitrogen (ON) load, and very good performance for total nitrogen (TN) load. The average loss coefficient was 62.74 kg/ha for NH4-N, 0.98 kg/ha for NO3-N, 0.0004 kg/ha for ON and 63.80 kg/ha for TN. The dominant form of nitrogen loss was NH4-N, owing to the applied fertilizers, and the most affected zones were aggregated in the middle and downstream regions covered by paddy and orange orchard. In order to control diffuse nitrogen losses, the most effective practices for the Qianyanzhou stereoscopic agriculture pattern were to reduce the farmland planting scale in the valley by afforestation, particularly for orchard in the

  19. The TeV γ-ray binary PSR B1259-63. Observations with the high energy stereoscopic system in the years 2005-2007

    International Nuclear Information System (INIS)

    Kerschhaggl, Matthias

    2010-01-01

    PSR B1259-63/SS2883 is a binary system in which a 48 ms pulsar orbits a massive Be star with a period of 3.4 years. The system exhibits variable, non-thermal radiation around periastron on the highly eccentric orbit (e=0.87), visible from radio to very high energies (VHE; E>100 GeV). When it was detected in TeV γ-rays with the High Energy Stereoscopic System (H.E.S.S.) in 2004, it became known as the first variable galactic VHE source. This thesis presents VHE data from PSR B1259-63 taken during the years 2005 and 2006 and before as well as shortly after the 2007 periastron passage. These data extend the knowledge of the lightcurve of this object to all phases of the binary orbit. The lightcurve constrains physical mechanisms present in this TeV source. Observations of VHE γ-rays with the H.E.S.S. telescope array using the Imaging Atmospheric Cherenkov Technique were performed. The H.E.S.S. instrument features an angular resolution of < 0.1° and an energy resolution of < 20%. The spectrum is described by a power law with a photon index of Γ=2.8±0.2(stat)±0.2(sys) and flux normalisation Φ0=(1.1±0.1(stat)±0.2(sys)) × 10⁻¹² TeV⁻¹ cm⁻² s⁻¹. PSR B1259-63 was also monitored in 2005 and 2006, far from periastron passage, comprising 8.9 h and 7.5 h of exposure, respectively. No significant excess of γ-rays is seen in those observations. PSR B1259-63 has been re-confirmed as a variable TeV γ-ray emitter. The firm detection of VHE photons emitted at a true anomaly θ∼0.35 of the pulsar orbit, i.e. already ∼50 days prior to the periastron passage, disfavors the stellar disc target scenario as a primary emission mechanism, based on current knowledge about the companion star's disc inclination, extension, and density profile. In a phenomenological study, indirect evidence that PSR B1259-63 could in fact be a periodic VHE emitter is presented using the TeV data discussed in this work. While the TeV energy flux level seems to depend only on the binary separation, this behavior is not seen in X-rays. Moreover, model calculations based on inverse Compton (IC) scattering of

  20. The TeV γ-ray binary PSR B1259-63. Observations with the high energy stereoscopic system in the years 2005-2007

    Energy Technology Data Exchange (ETDEWEB)

    Kerschhaggl, Matthias

    2010-04-06

    PSR B1259-63/SS2883 is a binary system in which a 48 ms pulsar orbits a massive Be star with a period of 3.4 years. The system exhibits variable, non-thermal radiation around periastron on the highly eccentric orbit (e=0.87), visible from radio to very high energies (VHE; E>100 GeV). When it was detected in TeV γ-rays with the High Energy Stereoscopic System (H.E.S.S.) in 2004, it became known as the first variable galactic VHE source. This thesis presents VHE data from PSR B1259-63 taken during the years 2005 and 2006 and shortly before as well as shortly after the 2007 periastron passage. These data extend the knowledge of the lightcurve of this object to all phases of the binary orbit. The lightcurve constrains the physical mechanisms at work in this TeV source. Observations of VHE γ-rays with the H.E.S.S. telescope array using the Imaging Atmospheric Cherenkov Technique were performed. The H.E.S.S. instrument features an angular resolution of < 0.1° and an energy resolution of < 20%. Gamma-ray events in an energy range of 0.5-70 TeV were recorded. From these data, energy spectra and a lightcurve with a monthly time sampling were extracted. VHE γ-ray emission from PSR B1259-63 was detected with an overall significance of 9.5 standard deviations using 55 h of exposure, obtained from April to August 2007. The monthly flux of γ-rays during the observation period was measured, yielding VHE lightcurve data for the early pre-periastron phase of the system for the first time. No spectral variability was found on timescales of months. The spectrum is described by a power law with a photon index of Γ = 2.8 ± 0.2(stat) ± 0.2(sys) and a flux normalisation Φ0 = (1.1 ± 0.1(stat) ± 0.2(sys)) × 10^-12 TeV^-1 cm^-2 s^-1. PSR B1259-63 was also monitored in 2005 and 2006, far from periastron passage, comprising 8.9 h and 7.5 h of exposure, respectively. No significant excess of γ-rays is seen in those observations. PSR B1259-63 has

  2. Evaluation of the Role of Monocular Video Game Play as an Adjuvant to Occlusion Therapy in the Management of Anisometropic Amblyopia.

    Science.gov (United States)

    Singh, Archita; Sharma, Pradeep; Saxena, Rohit

    2017-07-01

    To evaluate the role of monocular video game play as an adjuvant to occlusion therapy in the treatment of anisometropic amblyopia. In a prospective randomized study design, 68 children with ages ranging from 6 to 14 years who had anisometropic amblyopia with a best corrected visual acuity (BCVA) in the amblyopic eye of better than 6/36 and worse than 6/12 and no manifest strabismus were recruited. They were randomly allocated into two groups: 34 children received 1 hour per day of video game play for the first month plus 6 hours per day of occlusion therapy (video game and occlusion group) and 34 children received 6 hours per day of occlusion therapy alone (occlusion only group). Patients were then evaluated at baseline and 1 and 3 months after treatment for BCVA, stereoacuity, and contrast sensitivity. In the video game and occlusion group, BCVA improved from 0.61 ± 0.12 logarithm of the minimum angle of resolution (logMAR) at baseline to 0.51 ± 0.14 logMAR (P = .001) at 1 month and 0.40 ± 0.15 logMAR (P = .001) at 3 months. In the occlusion only group, BCVA improved from 0.65 ± 0.09 logMAR at baseline to 0.60 ± 0.10 logMAR (P = .001) at 1 month and 0.48 ± 0.10 logMAR (P = .001) at 3 months. There was significantly more improvement in the video game and occlusion group compared to the occlusion only group (P = .003 at 1 month and P = .027 at 3 months). Video game play plus occlusion therapy enhances the visual recovery in anisometropic amblyopia. [J Pediatr Ophthalmol Strabismus. 2017;54(4):244-249.]. Copyright 2017, SLACK Incorporated.

  3. Object localization in handheld thermal images for fireground understanding

    Science.gov (United States)

    Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven

    2017-05-01

    Despite the broad application of handheld thermal imaging cameras in firefighting, their use is mostly limited to subjective interpretation by the person carrying the device. To overcome this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. An automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques with information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-label convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, the understanding of the room configuration (i.e., object locations) for indoor localization in reduced-visibility environments and the linking with Building Information Models (BIM) are investigated.
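
    The abstract does not give implementation details, but transfer learning for multi-label object recognition typically means reusing a pretrained backbone and replacing its classification head with independent sigmoid outputs. The sketch below, using PyTorch with a ResNet-18 backbone, is one plausible setup under those assumptions; the class count, dummy batch, and labels are placeholders, not the authors' data.

        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_CLASSES = 5          # hypothetical object classes (door, window, person, ...)

        # Reuse a pretrained backbone and replace the final layer with a multi-label
        # head: one logit per class, trained with BCE-with-logits so several objects
        # can be present in the same thermal frame.
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():
            p.requires_grad = False                                  # freeze pretrained features
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)      # trainable head

        criterion = nn.BCEWithLogitsLoss()
        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

        # One illustrative training step on a dummy batch (3-channel input assumed;
        # single-channel thermal images would be replicated or the first conv adapted).
        images = torch.randn(8, 3, 224, 224)
        targets = torch.randint(0, 2, (8, NUM_CLASSES)).float()      # multi-hot labels

        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, targets)
        loss.backward()
        optimizer.step()
        print(float(loss))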

  4. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    Science.gov (United States)

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.
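
    iCAVE itself is a compiled desktop tool, but the idea of extending a 2D force-directed layout to 3D can be illustrated with generic Python libraries. The sketch below (NetworkX plus Matplotlib, not part of iCAVE) computes a 3D spring layout for a toy network and renders it as a rotatable scatter plot; the random graph stands in for a real interaction network.

        import networkx as nx
        import matplotlib.pyplot as plt

        # Toy interaction network; in practice the edge list would come from a
        # protein-protein or drug-target database.
        G = nx.erdos_renyi_graph(n=60, p=0.05, seed=1)

        # Force-directed (spring) layout computed directly in three dimensions.
        pos = nx.spring_layout(G, dim=3, seed=1)

        fig = plt.figure()
        ax = fig.add_subplot(111, projection="3d")
        xs, ys, zs = zip(*(pos[n] for n in G.nodes))
        ax.scatter(xs, ys, zs, s=20)
        for u, v in G.edges:
            x, y, z = zip(pos[u], pos[v])
            ax.plot(x, y, z, linewidth=0.5)
        plt.show()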

  5. Retinal image quality during accommodation.

    Science.gov (United States)

    López-Gil, Norberto; Martin, Jesson; Liu, Tao; Bradley, Arthur; Díaz-Muñoz, David; Thibos, Larry N

    2013-07-01

    We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful
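
    The refractive-state computation described above searches, through focus, for the correcting-lens power that maximises an image-quality metric. As a heavily simplified illustration of that through-focus idea, the sketch below adds a defocus term to a wavefront defined over a circular pupil, computes the point-spread function by Fourier optics, and reports the ordinary (not visual) Strehl ratio at each defocus value; the pupil grid, wavefront map, and focus range are placeholder assumptions, not the study's metric.

        import numpy as np

        N = 256                                     # pupil grid size
        x = np.linspace(-1.0, 1.0, N)
        X, Y = np.meshgrid(x, x)
        R2 = X**2 + Y**2
        pupil = (R2 <= 1.0).astype(float)           # unit-radius circular pupil

        base_aberration = 0.05 * (X**3 - X)         # placeholder higher-order aberration (waves)

        def strehl(defocus_waves):
            """Ordinary Strehl ratio for a given added defocus term (in waves)."""
            wavefront = base_aberration + defocus_waves * (2.0 * R2 - 1.0)
            field = pupil * np.exp(2j * np.pi * wavefront)
            psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
            psf_dl = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2   # diffraction-limited
            return psf.max() / psf_dl.max()

        # Through-focus search: the added defocus that maximises the metric indicates
        # the focus correction that would maximise image quality for this wavefront.
        focus_range = np.linspace(-0.5, 0.5, 41)
        ratios = [strehl(d) for d in focus_range]
        best = focus_range[int(np.argmax(ratios))]
        print(f"best-focus defocus term: {best:+.3f} waves, Strehl = {max(ratios):.3f}")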

  6. Study of three-dimensional image display by systemic CT

    International Nuclear Information System (INIS)

    Fujioka, Tadao; Ebihara, Yoshiyuki; Unei, Hiroshi; Hayashi, Masao; Shinohe, Tooru; Wada, Yuji; Sakai, Takatsugu; Kashima, Kenji; Fujita, Yoshihiro

    1989-01-01

    A head phantom for CT was scanned at 2 mm intervals from the neck to the vertex in an attempt to obtain a three-dimensional display of the bones and facial epidermis from ordinary axial images. Clinically, three-dimensional images were formed of the eye sockets and hip joints. With the three-dimensional image of the head phantom, the entire head could be displayed at any angle. Clinically, images were obtained that could not be attained by ordinary CT scanning, such as fractures of the bones of the eye sockets and the stereoscopic structure of the base of the cranium. The three-dimensional image display is considered to be useful in clinical diagnosis. (author)

  7. Augmented reality for breast imaging.

    Science.gov (United States)

    Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio

    2018-02-21

    Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency view of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. The contrast agent (CA) used was a gadolinium (Gd) injection (0.1 mmol/kg at 2 ml/s) delivered by a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. Applying ABI to the breast can improve clinical outcomes, giving an enhanced view of the structures to work on. It should be studied further to determine its utility in clinical practice.
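
    The article does not describe its conversion pipeline, but building any volume for AR display from MRI starts with reading the DICOM series into a 3D array. A minimal, hedged sketch using pydicom is shown below; the directory path is a placeholder and the sorting assumes a single axial series.

        import numpy as np
        import pydicom
        from pathlib import Path

        series_dir = Path("mri_series")             # placeholder path to one DICOM series

        # Read every slice, then sort by the slice position so the stack is in
        # anatomical order before it is handed to any 3D / AR renderer.
        slices = [pydicom.dcmread(p) for p in series_dir.glob("*.dcm")]
        slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))

        volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])

        # Apply the stored rescale (if present) so voxel values are in modality units.
        slope = float(getattr(slices[0], "RescaleSlope", 1.0))
        intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
        volume = volume * slope + intercept

        print(volume.shape)                         # (n_slices, rows, cols)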

  8. Hands-on guide for 3D image creation for geological purposes

    Science.gov (United States)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and are therefore easier to understand if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding of the structure under consideration in 3D. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows the original colors to be preserved. The advantage of red
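
    The red-cyan anaglyph construction described above is easy to reproduce: take the red channel from the left image and the green and blue channels from the right image. A minimal sketch with Pillow and NumPy follows; the two input file names are placeholders, and the images are assumed to be the same size and already roughly aligned.

        import numpy as np
        from PIL import Image

        # Left/right views of the same outcrop, taken from two slightly offset
        # viewpoints (placeholder file names).
        left = np.asarray(Image.open("left.jpg").convert("RGB"))
        right = np.asarray(Image.open("right.jpg").convert("RGB"))

        # Red channel from the left image, green and blue from the right image:
        # viewed through red-cyan glasses, each eye then sees only "its" view.
        anaglyph = np.zeros_like(left)
        anaglyph[..., 0] = left[..., 0]
        anaglyph[..., 1:] = right[..., 1:]

        Image.fromarray(anaglyph).save("anaglyph.png")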

  9. Analysis of the three-dimensional trajectories of dusts observed with a stereoscopic fast framing camera in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Shoji, M., E-mail: shoji@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Masuzaki, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Tanaka, Y. [Kanazawa University, Kakuma, Kanazawa 920-1192 (Japan); Pigarov, A.Yu.; Smirnov, R.D. [University of California at San Diego, La Jolla, CA 92093 (United States); Kawamura, G.; Uesugi, Y.; Yamada, H. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan)

    2015-08-15

    The three-dimensional trajectories of dust particles have been observed with two stereoscopic fast framing cameras installed in upper and outer viewports of the Large Helical Device (LHD). The observations show that the dust trajectories are located in the divertor legs and in the ergodic layer around the main plasma confinement region. While most of the dust particles are found to move approximately along the magnetic field lines with acceleration, some have sharply curved trajectories crossing over the magnetic field lines. A dust transport simulation code was modified to investigate dust trajectories in fully three-dimensional geometries such as LHD plasmas. It can explain the general trend of most of the observed dust trajectories by the effect of the plasma flow in the peripheral plasma. However, the behavior of the dust particles with sharply curved trajectories is not consistent with the simulations.
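
    Reconstructing a 3D dust trajectory from two calibrated views reduces, frame by frame, to triangulating the matched image positions of each particle. The OpenCV-based sketch below shows that step in isolation; the two projection matrices and the matched pixel tracks are placeholder inputs, not LHD calibration data.

        import numpy as np
        import cv2

        # Placeholder 3x4 projection matrices of the two fast framing cameras
        # (these would come from a camera calibration of the real viewports).
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
        R = cv2.Rodrigues(np.array([[0.0], [0.3], [0.0]]))[0]
        t = np.array([[-0.5], [0.0], [0.0]])
        P2 = np.hstack([R, t]).astype(np.float64)

        # Matched image positions of one dust particle in N consecutive frames,
        # shape (2, N), one column per frame (placeholder values).
        pts1 = np.array([[0.10, 0.12, 0.15], [0.20, 0.21, 0.23]])
        pts2 = np.array([[0.08, 0.11, 0.13], [0.19, 0.20, 0.22]])

        # Linear triangulation; the result is in homogeneous coordinates.
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
        trajectory = (X_h[:3] / X_h[3]).T           # (N, 3) points along the trajectory
        print(trajectory)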

  10. The Monocular Duke of Urbino.

    Science.gov (United States)

    Schwartz, Stephen G; Leffler, Christopher T; Chavis, Pamela S; Khan, Faraaz; Bermudez, Dennis; Flynn, Harry W

    2016-01-01

    Federico da Montefeltro (1422-1482), the Duke of Urbino, was a well-known historical figure during the Italian Renaissance. He is the subject of a famous painting by Piero della Francesca (1416-1492), which displays the Duke from the left and highlights his oddly shaped nose. The Duke is known to have lost his right eye due to an injury sustained during a jousting tournament, which is why the painting portrays him from the left. Some historians teach that the Duke subsequently underwent nasal surgery to remove tissue from the bridge of his nose in order to expand his visual field in an attempt to compensate for the lost eye. In theory, removal of a piece of the nose may have expanded the nasal visual field, especially the "eye motion visual field" that encompasses eye movements. In addition, removing part of the nose may have reduced some of the effects of ocular parallax. Finally, shifting of the visual egocenter may have occurred, although this seems likely unrelated to the proposed nasal surgery. Whether or not the Duke actually underwent the surgery cannot be proven, but it seems unlikely that this would have substantially improved his visual function.

  11. Object-Oriented Hierarchy Radiation Consistency for Different Temporal and Different Sensor Images

    Directory of Open Access Journals (Sweden)

    Nan Su

    2018-02-01

    In this paper, we propose a novel object-oriented hierarchy radiation consistency method for dense matching of different-temporal and different-sensor data in 3D reconstruction. For different-temporal images, an illumination consistency method is proposed to address both illumination uniformity within a single image and relative illumination normalization between image pairs. In the relative illumination normalization step in particular, singular value equalization and a linear relationship of the invariant pixels are used for the initial global illumination normalization and for the object-oriented refined illumination normalization, respectively. For different-sensor images, we propose a union group sparse method based on an improvement of the original group sparse model. The different-sensor images are brought to a similar smoothness level by applying the same singular-value threshold to the union group matrix. Our method comprehensively considers the factors that influence dense matching of different-temporal and different-sensor stereoscopic image pairs, simultaneously improving illumination consistency and smoothness consistency. The radiation consistency experiments verify the effectiveness and superiority of the proposed method in comparison with two other methods. Moreover, in the dense matching experiment on the mixed stereoscopic image pairs, our method shows clear advantages for objects in urban areas.
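
    The paper's singular value equalization step is not specified in the abstract; one common reading of that term in relative radiometric normalization is to rescale a target image so that its largest singular value matches that of a reference image. The sketch below implements only that reading, as an illustration on single-band float arrays, and should not be taken as the authors' exact algorithm.

        import numpy as np

        def singular_value_equalize(target, reference):
            """Rescale 'target' so its largest singular value matches 'reference'.

            One simple interpretation of singular value equalization for a global
            brightness/contrast match between two single-band images (float arrays).
            """
            s_t = np.linalg.svd(target, compute_uv=False)[0]
            s_r = np.linalg.svd(reference, compute_uv=False)[0]
            return target * (s_r / s_t)

        rng = np.random.default_rng(0)
        reference = rng.uniform(50, 200, size=(128, 128))
        target = 0.6 * reference + 10.0             # same scene, different illumination

        normalized = singular_value_equalize(target, reference)
        print(reference.mean(), target.mean(), normalized.mean())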

  12. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    Science.gov (United States)

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. VISIDEP™: visual image depth enhancement by parallax induction

    Science.gov (United States)

    Jones, Edwin R.; McLaurin, A. P.; Cathey, LeConte

    1984-05-01

    The usual descriptions of depth perception have traditionally required the simultaneous presentation of disparate views to separate eyes, with the concomitant demand that the resulting binocular parallax be horizontally aligned. Our work suggests that the visual input information is compared in a short-term memory buffer, which permits the brain to compute depth as it is normally perceived. However, the mechanism utilized is also capable of receiving and processing the stereographic information even when it is received monocularly or when identical inputs are simultaneously fed to both eyes. We have also found that the restriction to horizontally displaced images is not a necessary requirement and that an improvement in image acceptability is achieved by the use of vertical parallax. Use of these ideas permits the presentation of three-dimensional scenes on flat screens in full color without the encumbrance of glasses or other viewing aids.

  14. New Developments In Particle Image Velocimetry (PIV) For The Study Of Complex Plasmas

    International Nuclear Information System (INIS)

    Thomas, Edward Jr.; Fisher, Ross; Shaw, Joseph; Jefferson, Robert; Cianciosa, Mark; Williams, Jeremiah

    2011-01-01

    Particle Image Velocimetry (PIV) is a fluid measurement technique in which the average displacement of small groups of particles is measured by comparing a pair of images separated in time by an interval Δt. For over a decade, several variations of the PIV technique (e.g., two-dimensional, stereoscopic, and tomographic PIV) have been used to characterize particle transport, instabilities, and the thermal properties of complex plasmas. This paper describes the basic principles involved in the PIV analysis technique and discusses potential future applications of PIV to the study of complex plasmas.
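
    The displacement estimate at the heart of PIV is a cross-correlation between corresponding interrogation windows from the two frames: the location of the correlation peak gives the average particle displacement over Δt. A minimal FFT-based sketch for a single window pair is shown below; the window size and the synthetic shift are illustrative, not taken from the paper.

        import numpy as np

        def window_displacement(win_a, win_b):
            """Average displacement (dy, dx) of win_b relative to win_a via FFT cross-correlation."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap peak indices to signed shifts (circular correlation).
            shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
            return tuple(shifts)

        # Synthetic 32x32 interrogation windows: frame B contains the same particle
        # pattern as frame A, displaced by (3, 5) pixels.
        rng = np.random.default_rng(2)
        frame = rng.random((64, 64))
        win_a = frame[10:42, 10:42]
        win_b = frame[7:39, 5:37]

        print(window_displacement(win_a, win_b))    # expected roughly (3, 5)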

  15. Light field moment imaging with the ptychographic iterative engine

    Directory of Open Access Journals (Sweden)

    Zhilong Jiang

    2014-10-01

    The recently developed Light Field Moment Imaging (LMI) technique is adopted to show the stereoscopic structure of samples studied in Coherent Diffractive Imaging (CDI), where 3D images have previously been generated only with complicated experimental procedures, such as rotation of the sample, and time-consuming computation. An animation over a large view angle can be generated with LMI very quickly, and the 3D structure of the sample can be shown vividly. This method can find many applications in coherent diffractive imaging with x-ray and electron beams, where only a glimpse of the hierarchical structure is required and a quick and simple 3D view of the object is sufficient. The feasibility of this method is demonstrated theoretically and experimentally with a recently developed CDI method called the Ptychographic Iterative Engine.

  16. Study of Three-Dimensional Image Brightness Loss in Stereoscopy

    Directory of Open Access Journals (Sweden)

    Hsing-Cheng Yu

    2015-10-01

    When viewing three-dimensional (3D) images, whether in cinemas or on stereoscopic televisions, viewers experience the same problem of image brightness loss. This study investigates image brightness loss in 3D displays, with the primary aim of quantifying the brightness degradation in the 3D mode. A further aim is to determine the relationship of the image brightness to the corresponding two-dimensional (2D) images in order to adjust the 3D-image brightness values. In addition, the photographic principle is used to measure metering values by capturing 2D and 3D images on television screens. By analyzing these images with Statistical Product and Service Solutions (SPSS) software, the image brightness values can be estimated using a statistical regression model, which also indicates the impact of various environmental factors and hardware on the image brightness. Analysis of the experimental results, comparing the brightness of 2D and 3D images, indicates a 60.8% degradation in the 3D image brightness amplitude. The experimental values, from 52.4% to 69.2%, are within the 95% confidence interval

  17. Image, Image, Image

    Science.gov (United States)

    Howell, Robert T.

    2004-01-01

    With all the talk today about accountability, budget cuts, and the closing of programs in public education, teachers cannot overlook the importance of image in the field of industrial technology. It is very easy for administrators to cut ITE (industrial technology education) programs to save school money--money they might shift to teaching the…

  18. Layer 2/3 synapses in monocular and binocular regions of tree shrew visual cortex express mAChR-dependent long-term depression and long-term potentiation.

    Science.gov (United States)

    McCoy, Portia; Norton, Thomas T; McMahon, Lori L

    2008-07-01

    Acetylcholine is an important modulator of synaptic efficacy and is required for learning and memory tasks involving the visual cortex. In rodent visual cortex, activation of muscarinic acetylcholine receptors (mAChRs) induces a persistent long-term depression (LTD) of transmission at synapses recorded in layer 2/3 of acute slices. Although the rodent studies expand our knowledge of how the cholinergic system modulates synaptic function underlying learning and memory, they are not easily extrapolated to more complex visual systems. Here we used tree shrews for their similarities to primates, including a visual cortex with separate, defined regions of monocular and binocular innervation, to determine whether mAChR activation induces long-term plasticity. We find that the cholinergic agonist carbachol (CCh) not only induces long-term plasticity, but the direction of the plasticity depends on the subregion. In the monocular region, CCh application induces LTD of the postsynaptic potential recorded in layer 2/3 that requires activation of m3 mAChRs and a signaling cascade that includes activation of extracellular signal-regulated kinase (ERK) 1/2. In contrast, layer 2/3 postsynaptic potentials recorded in the binocular region express long-term potentiation (LTP) following CCh application that requires activation of m1 mAChRs and phospholipase C. Our results show that activation of mAChRs induces long-term plasticity at excitatory synapses in tree shrew visual cortex. However, depending on the ocular inputs to that region, variation exists as to the direction of plasticity, as well as to the specific mAChR and signaling mechanisms that are required.

  19. Investigation of 1 : 1,000 Scale Map Generation by Stereo Plotting Using UAV Images

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may bring uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This aspect will raise efficiency issues for stereo plotting of UAV images. More importantly, this aspect will make contour generation from UAV images very difficult. This paper will discuss effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after drawing a specific
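
    The Y-disparity check mentioned above is a quick sanity test of the relative orientation: after orientation, conjugate tie points in a stereo pair should differ only in x (the horizontal parallax that encodes depth), so the residual y-differences should be small. A minimal sketch of that evaluation follows; the tie-point arrays are placeholders standing in for points measured on epipolar-resampled images.

        import numpy as np

        # Tie-point image coordinates (pixels) in the left and right images of one
        # model, after relative orientation / epipolar resampling (placeholder data).
        left_pts = np.array([[512.3, 401.7], [988.1, 220.4], [130.5, 760.2]])
        right_pts = np.array([[498.9, 401.9], [961.7, 220.1], [118.0, 760.8]])

        y_disparity = right_pts[:, 1] - left_pts[:, 1]

        print("mean |y-disparity| :", np.mean(np.abs(y_disparity)), "px")
        print("RMS  y-disparity   :", np.sqrt(np.mean(y_disparity ** 2)), "px")
        print("max  |y-disparity| :", np.max(np.abs(y_disparity)), "px")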

  20. INVESTIGATION OF 1 : 1,000 SCALE MAP GENERATION BY STEREO PLOTTING USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Rhee

    2017-08-01

    Large scale maps and image mosaics are representative geospatial data that can be extracted from UAV images. Map drawing using UAV images can be performed either by creating orthoimages and digitizing them, or by stereo plotting. While maps generated by digitization may serve the need for geospatial data, many institutions and organizations require map drawing using stereoscopic vision on stereo plotting systems. However, there are several aspects to be checked for UAV images to be utilized for stereo plotting. The first aspect is the accuracy of exterior orientation parameters (EOPs) generated through automated bundle adjustment processes. It is well known that GPS and IMU sensors mounted on a UAV are not very accurate. It is necessary to adjust initial EOPs accurately using tie points. For this purpose, we have developed a photogrammetric incremental bundle adjustment procedure. The second aspect is unstable shooting conditions compared to aerial photography. Unstable image acquisition may bring uneven stereo coverage, which will eventually result in accuracy loss. Oblique stereo pairs will create eye fatigue. The third aspect is the small coverage of UAV images. This aspect will raise efficiency issues for stereo plotting of UAV images. More importantly, this aspect will make contour generation from UAV images very difficult. This paper will discuss effects related to these three aspects. In this study, we tried to generate a 1 : 1,000 scale map from the dataset using EOPs generated from software developed in-house. We evaluated the Y-disparity of the tie points extracted automatically through the photogrammetric incremental bundle adjustment process. We could confirm that stereoscopic viewing is possible. Stereoscopic plotting work was carried out by a professional photogrammetrist. In order to analyse the accuracy of the map drawing using stereoscopic vision, we compared the horizontal and vertical position differences between adjacent models after

  1. Two wide-angle imaging neutral-atom spectrometers

    Energy Technology Data Exchange (ETDEWEB)

    McComas, D.J.

    1997-12-31

    The Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) mission provides a new capability for stereoscopically imaging the magnetosphere. By imaging the charge exchange neutral atoms over a broad energy range (1 < E ≲ 100 keV) using two identical instruments on two widely-spaced high-altitude, high-inclination spacecraft, TWINS will enable the 3-dimensional visualization and the resolution of large scale structures and dynamics within the magnetosphere for the first time. These observations will provide a leap ahead in the understanding of the global aspects of the terrestrial magnetosphere and directly address a number of critical issues in the "Sun-Earth Connections" science theme of the NASA Office of Space Science.

  2. Two wide-angle imaging neutral-atom spectrometers

    International Nuclear Information System (INIS)

    McComas, D.J.

    1997-01-01

    The Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) mission provides a new capability for stereoscopically imaging the magnetosphere. By imaging the charge exchange neutral atoms over a broad energy range (1 < E ≲ 100 keV) using two identical instruments on two widely-spaced high-altitude, high-inclination spacecraft, TWINS will enable the 3-dimensional visualization and the resolution of large scale structures and dynamics within the magnetosphere for the first time. These observations will provide a leap ahead in the understanding of the global aspects of the terrestrial magnetosphere and directly address a number of critical issues in the "Sun-Earth Connections" science theme of the NASA Office of Space Science

  3. Project NANO (nanoscience and nanotechnology outreach): a STEM training program that brings SEM's and stereoscopes into high-school and middle-school classrooms

    Science.gov (United States)

    Cady, Sherry L.; Blok, Mikel; Grosse, Keith; Wells, Jennifer

    2014-09-01

    The program Project NANO (Nanoscience and Nanotechnology Outreach) enables middle and high school students to discover and research submicroscopic phenomena in a new and exciting way with the use of optical and scanning electron microscopes in the familiar surroundings of their middle or high school classrooms. Project NANO provides secondary-level professional development workshops, support for classroom instruction and teacher curriculum development, and the means to deliver Project NANO toolkits (SEM, stereoscope, computer, supplies) to classrooms with Project NANO trained teachers. Evaluation surveys document the impact of the program on students' attitudes toward science and technology and on the learning outcomes for secondary-level teachers. Project NANO workshops (offered for professional development credit) enable teachers to gain familiarity using and teaching with the SEM. Teachers also learn to integrate new content knowledge and skills into topic-driven, standards-based units of instruction specifically designed to support the development of students' higher-order thinking skills, including problem solving and evidence-based thinking. The Project NANO management team includes a former university science faculty member, two high school science teachers, and an educational researcher. To date, over 7500 students have experienced the impact of the Project NANO program, which provides an exciting and effective model for engaging students in the discovery of nanoscale phenomena and concepts in a fun way.

  4. Three dimensional visualization of medical images

    International Nuclear Information System (INIS)

    Suto, Yasuzo

    1992-01-01

    Three-dimensional visualization is a stereoscopic technique that supports the diagnosis and treatment of anatomically complicated sites in bone and organs. In this article, the current status and technical applications of three-dimensional visualization are introduced with special reference to X-ray CT and MRI. The surface display technique is the most common approach to three-dimensional visualization, comprising geometric model, voxel element, and stereographic composition techniques. Recent attention has been paid to a method that displays the interior content of the subject, called volume rendering, whereby information on the living body is provided accurately. The application of three-dimensional visualization is described in terms of diagnostic imaging and surgical simulation. (N.K.)
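
    Surface display and volume rendering are usually implemented in dedicated software, but the simplest volume-based display, a maximum intensity projection, can be written in a few lines and gives a feel for how a CT or MRI stack is collapsed into a viewable image. The NumPy sketch below assumes a volume already loaded as a 3D array; the synthetic data is a placeholder.

        import numpy as np

        # Placeholder CT/MRI volume: (slices, rows, cols). In practice this array
        # would come from a stack of reconstructed axial images.
        rng = np.random.default_rng(3)
        volume = rng.normal(0.0, 1.0, size=(120, 256, 256))
        volume[40:80, 100:150, 100:150] += 8.0      # a bright synthetic "structure"

        # Maximum intensity projection along each axis: the brightest voxel along
        # every ray is kept, which makes dense structures such as bone or
        # contrast-filled vessels stand out.
        mip_axial = volume.max(axis=0)
        mip_coronal = volume.max(axis=1)
        mip_sagittal = volume.max(axis=2)

        print(mip_axial.shape, mip_coronal.shape, mip_sagittal.shape)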

  5. Image-guidance for surgical procedures

    International Nuclear Information System (INIS)

    Peters, Terry M

    2006-01-01

    Contemporary imaging modalities can now provide the surgeon with high quality three- and four-dimensional images depicting not only normal anatomy and pathology, but also vascularity and function. A key component of image-guided surgery (IGS) is the ability to register multi-modal pre-operative images to each other and to the patient. The other important component of IGS is the ability to track instruments in real time during the procedure and to display them as part of a realistic model of the operative volume. Stereoscopic, virtual- and augmented-reality techniques have been implemented to enhance the visualization and guidance process. For the most part, IGS relies on the assumption that the pre-operatively acquired images used to guide the surgery accurately represent the morphology of the tissue during the procedure. This assumption may not necessarily be valid, and so intra-operative real-time imaging using interventional MRI, ultrasound, video and electrophysiological recordings are often employed to ameliorate this situation. Although IGS is now in extensive routine clinical use in neurosurgery and is gaining ground in other surgical disciplines, there remain many drawbacks that must be overcome before it can be employed in more general minimally-invasive procedures. This review overviews the roots of IGS in neurosurgery, provides examples of its use outside the brain, discusses the infrastructure required for successful implementation of IGS approaches and outlines the challenges that must be overcome for IGS to advance further. (topical review)
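
    One of the registration building blocks mentioned above, aligning a set of tracked fiducial points on the patient with the same points located in the pre-operative image, has a standard closed-form least-squares solution via SVD. The sketch below is a generic illustration of that algorithm, not the method of any particular IGS system; the point lists are synthetic placeholders.

        import numpy as np

        def rigid_transform(src, dst):
            """Least-squares rotation R and translation t with dst ~= src @ R.T + t (Kabsch/SVD)."""
            src_c = src - src.mean(axis=0)
            dst_c = dst - dst.mean(axis=0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
            return R, t

        # Placeholder fiducials: image-space points and the same points as seen by the
        # tracking system (generated here from a known rotation + translation + noise).
        rng = np.random.default_rng(4)
        image_pts = rng.uniform(-50, 50, size=(6, 3))
        angle = np.deg2rad(20.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0,            0.0,           1.0]])
        patient_pts = image_pts @ R_true.T + np.array([10.0, -5.0, 2.0]) + rng.normal(0, 0.1, (6, 3))

        R, t = rigid_transform(image_pts, patient_pts)
        fre = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1).mean()
        print("fiducial registration error:", fre)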

  6. Chronic imaging through "transparent skull" in mice.

    Directory of Open Access Journals (Sweden)

    Anna Steinzeig

    Growing interest in long-term visualization of cortical structure and function requires methods that allow observation of an intact cortex in longitudinal imaging studies. Here we describe a detailed protocol for the "transparent skull" (TS) preparation based on skull clearing with cyanoacrylate, which is applicable for long-term imaging through the intact skull in mice. We characterized the properties of the TS in imaging of intrinsic optical signals and compared them with the more conventional cranial window preparation. Our results show that the TS is less invasive, maintains stable transparency for at least two months, and yields data that compare favorably with those obtained from the conventional cranial window. We applied this method to experiments showing that a four-week treatment with the antidepressant fluoxetine combined with one week of monocular deprivation induced a shift in ocular dominance in the mouse visual cortex, confirming that fluoxetine treatment restores critical-period-like plasticity. Our results demonstrate that the TS preparation could become a useful method for long-term visualization of the living mouse brain.

  7. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    International Nuclear Information System (INIS)

    Kim, Joshua; Zhang, Tiezhi; Lu, Weiguo

    2014-01-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source–dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapping regions provide a stereoscopic field of view of approximately 10–15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source–dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented. (paper)

  8. Dual source and dual detector arrays tetrahedron beam computed tomography for image guided radiotherapy

    Science.gov (United States)

    Kim, Joshua; Lu, Weiguo; Zhang, Tiezhi

    2014-02-01

    Cone-beam computed tomography (CBCT) is an important online imaging modality for image guided radiotherapy. However, suboptimal image quality and the lack of a real-time stereoscopic imaging function limit its implementation in advanced treatment techniques, such as online adaptive and 4D radiotherapy. Tetrahedron beam computed tomography (TBCT) is a novel online imaging modality designed to improve on the image quality provided by CBCT. TBCT geometry is flexible, and multiple detector and source arrays can be used for different applications. In this paper, we describe a novel dual source-dual detector TBCT system that is specially designed for LINAC radiation treatment machines. The imaging system is positioned in-line with the MV beam and is composed of two linear array x-ray sources mounted alongside the electronic portal imaging device and two linear arrays of x-ray detectors mounted below the machine head. The detector and x-ray source arrays are orthogonal to each other, and each pair of source and detector arrays forms a tetrahedral volume. Four planar images can be obtained from different view angles at each gantry position at a frame rate as high as 20 frames per second. The overlapping regions provide a stereoscopic field of view of approximately 10-15 cm. With a half gantry rotation, a volumetric CT image can be reconstructed having a 45 cm field of view. Due to the scatter rejecting design of the TBCT geometry, the system can potentially produce high quality 2D and 3D images with less radiation exposure. The design of the dual source-dual detector system is described, and preliminary results of studies performed on numerical phantoms and simulated patient data are presented.

  9. The effect of image position on the Independent Components of natural binocular images.

    Science.gov (United States)

    Hunter, David W; Hibbard, Paul B

    2018-01-11

    Human visual performance degrades substantially as the angular distance from the fovea increases. This decrease in performance is found for both binocular and monocular vision. Although analysis of the statistics of natural images has provided significant insights into human visual processing, little research has focused on the statistical content of binocular images at eccentric angles. We applied Independent Component Analysis to rectangular image patches cut from locations within binocular images corresponding to different degrees of eccentricity. The distributions of components learned from the different locations were examined to determine how they varied across eccentricity. We found a general trend towards a broader spread of horizontal and vertical position disparity tunings in eccentric regions compared to the fovea, with the horizontal spread more pronounced than the vertical spread. Eccentric locations above the centroid show a strong bias towards far-tuned components, while eccentric locations below the centroid show a strong bias towards near-tuned components. These distributions exhibit substantial similarities with physiological measurements in V1; however, in common with previous research, we also observe important differences, in particular in the distributions of binocular phase disparity, which do not match physiology.
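
    The basic pipeline described above, learning independent components from patches cut out of images, can be illustrated with scikit-learn's FastICA. The monocular sketch below is a simplified stand-in: a real binocular analysis would concatenate the left- and right-eye patch at each sampled location into a single vector before running ICA, and would use calibrated stereo photographs rather than the placeholder random image.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(5)

        # Placeholder "natural image"; a real analysis would sample calibrated
        # stereo photographs at chosen eccentricities.
        image = rng.normal(size=(512, 512))

        def sample_patches(img, n_patches=2000, size=12):
            """Cut random square patches and flatten each one into a row vector."""
            h, w = img.shape
            rows = rng.integers(0, h - size, n_patches)
            cols = rng.integers(0, w - size, n_patches)
            return np.stack([img[r:r + size, c:c + size].ravel()
                             for r, c in zip(rows, cols)])

        X = sample_patches(image)
        X -= X.mean(axis=0)                      # remove the mean per dimension

        # For binocular data, the left and right patches at the same location would
        # be concatenated here, so each component carries a disparity tuning.
        ica = FastICA(n_components=64, max_iter=500, random_state=0)
        ica.fit(X)

        components = ica.components_.reshape(64, 12, 12)   # basis functions as patches
        print(components.shape)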

  10. Macro-carriers of plastic deformation of steel surface layers detected by digital image correlation

    Energy Technology Data Exchange (ETDEWEB)

    Kopanitsa, D. G., E-mail: kopanitsa@mail.ru; Ustinov, A. M., E-mail: artemustinov@mail.ru [Tomsk State University of Architecture and Building, 2 Solyanaya Sq, Tomsk, 634003 (Russian Federation); Potekaev, A. I., E-mail: potekaev@spti.tsu.ru [National Research Tomsk State University, 36 Lenin Ave., Tomsk, 634050 (Russian Federation); Klopotov, A. A., E-mail: klopotovaa@tsuab.ru [Tomsk State University of Architecture and Building, 2 Solyanaya Sq, Tomsk, 634003 (Russian Federation); National Research Tomsk State University, 36 Lenin Ave., Tomsk, 634050 (Russian Federation); Kopanitsa, G. D., E-mail: georgy.kopanitsa@mail.com [National Research Tomsk Polytechnic University, 30 Lenin Ave., Tomsk, 634050 (Russian Federation)

    2016-01-15

    This paper presents a study of the evolution of deformation fields in the surface layers of medium-carbon low-alloy specimens under compression. The experiments were performed on the “Universal Testing Machine 4500” using the Vic-3D digital stereoscopic image processing system. A transition between stages is reflected as a redistribution of deformation in the near-surface layers. Electron microscopy shows that the structure of the steel is a mixture of pearlite and ferrite grains. The proportions of pearlite and ferrite are 40% and 60%, respectively.

  11. 2D and 3D stereoscopic videos used as pre-anatomy lab tools improve students' examination performance in a veterinary gross anatomy course.

    Science.gov (United States)

    Al-Khalili, Sereen M; Coppoc, Gordon L

    2014-01-01

    The hypothesis for the research described in this article was that viewing an interactive two-dimensional (2D) or three-dimensional (3D) stereoscopic pre-laboratory video would improve efficiency and learning in the laboratory. A first-year DVM class was divided into 21 dissection teams of four students each. Primary variables were method of preparation (2D, 3D, or laboratory manual) and dissection region (thorax, abdomen, or pelvis). Teams were randomly assigned to a group (A, B, or C) in a crossover design experiment so that all students experienced each of the modes of preparation, but with different regions of the canine anatomy. All students were instructed to study normal course materials and the laboratory manual, the Guide, before coming to the laboratory session and to use them during the actual dissection as usual. Video groups were given a DVD with an interactive 10-12 minute video to view for the first 30 minutes of the laboratory session, while non-video groups were instructed to review the Guide. All groups were allowed 45 minutes to dissect the assigned section and find a list of assigned structures, after which all groups took a post-dissection quiz and attitudinal survey. The 2D groups performed better than the Guide groups (p=.028) on the post-dissection quiz, despite the fact that only a minority of the 2D-group students studied the Guide as instructed. There was no significant difference (p>.05) between 2D and 3D groups on the post-dissection quiz. Students preferred videos over the Guide.

  12. Two wide-angle imaging neutral-atom spectrometers (TWINS)

    International Nuclear Information System (INIS)

    McComas, D.J.; Blake, B.; Burch, J.

    1998-01-01

    Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) is a revolutionary new mission designed to stereoscopically image the magnetosphere in charge exchange neutral atoms for the first time. The authors propose to fly two identical TWINS instruments as a mission of opportunity on two widely-spaced high-altitude, high-inclination US Government spacecraft. Because the spacecraft are funded independently, TWINS can provide a vast quantity of high priority science observations (as identified in an ongoing new missions concept study and the Sun-Earth Connections Roadmap) at a small fraction of the cost of a dedicated mission. Because stereo observations of the near-Earth space environs will provide a particularly graphic means for visualizing the magnetosphere in action, and because of the dedication and commitment of the investigator team to the principles of carrying space science to the broader audience, TWINS will also be an outstanding tool for public education and outreach

  13. Criteria for the optimal selection of remote sensing optical images to map event landslides

    Science.gov (United States)

    Fiorucci, Federica; Giordan, Daniele; Santangelo, Michele; Dutto, Furio; Rossi, Mauro; Guzzetti, Fausto

    2018-01-01

    Landslides leave discernible signs on the land surface, most of which can be captured in remote sensing images. Trained geomorphologists analyse remote sensing images and map landslides through heuristic interpretation of photographic and morphological characteristics. Despite a wide use of remote sensing images for landslide mapping, no attempt to evaluate how the image characteristics influence landslide identification and mapping exists. This paper presents an experiment to determine the effects of optical image characteristics, such as spatial resolution, spectral content and image type (monoscopic or stereoscopic), on landslide mapping. We considered eight maps of the same landslide in central Italy: (i) six maps obtained through expert heuristic visual interpretation of remote sensing images, (ii) one map through a reconnaissance field survey, and (iii) one map obtained through a real-time kinematic (RTK) differential global positioning system (dGPS) survey, which served as a benchmark. The eight maps were compared pairwise and to a benchmark. The mismatch between each map pair was quantified by the error index, E. Results show that the map closest to the benchmark delineation of the landslide was obtained using the higher resolution image, where the landslide signature was primarily photographical (in the landslide source and transport area). Conversely, where the landslide signature was mainly morphological (in the landslide deposit) the best mapping result was obtained using the stereoscopic images. Albeit conducted on a single landslide, the experiment results are general, and provide useful information to decide on the optimal imagery for the production of event, seasonal and multi-temporal landslide inventory maps.
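
    The pairwise comparison above relies on an error index computed from the mapped landslide polygons. The exact definition is not given in the abstract; a widely used form for comparing two delineations A and B is E = (A ∪ B − A ∩ B) / (A ∪ B), computed on areas. The Shapely sketch below implements that form under this assumption, with toy polygons standing in for the real maps.

        from shapely.geometry import Polygon

        def error_index(a, b):
            """E = area(union - intersection) / area(union); 0 = identical, 1 = disjoint."""
            union = a.union(b).area
            inter = a.intersection(b).area
            return (union - inter) / union

        # Toy delineations of the same landslide from two different image interpretations.
        map_a = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])
        map_b = Polygon([(1, 0), (11, 0), (11, 6), (1, 6)])

        print(f"error index E = {error_index(map_a, map_b):.3f}")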

  14. Sci-Thur AM: YIS – 07: Optimizing dual-energy x-ray parameters using a single filter for both high and low-energy images to enhance soft-tissue imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, Wesley; Sattarivand, Mike [Department of Radiation Oncology, Dalhousie University at Nova Scotia Health Authority, Department of Radiation Oncology, Dalhousie University at Nova Scotia Health Authority (Canada)

    2016-08-15

    Objective: To optimize dual-energy parameters of the ExacTrac stereoscopic x-ray imaging system for lung SBRT patients. Methods: Simulated spectra and a lung phantom were used to optimize filter material, thickness, kVps, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number (Z) range [3–83] based on a metric defined to separate the spectra of the high and low energies. Both energies used the same filter due to time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor-mimicking material was imaged with filter thicknesses in the range [0–1] mm and kVps in the range [60–140]. A cost function based on the contrast-to-noise ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with the dual-energy technique using tin filtering. Results: Tin was the material of choice, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm tin and a [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible with the ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.
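
    The bone-suppressed image itself comes from a weighted subtraction of the log-transformed high- and low-kVp images; the weighting factor is what the cost function described above helps to choose. A minimal sketch of that subtraction step is shown below; the two input images and the weight are placeholders, not the optimised ExacTrac parameters.

        import numpy as np

        def dual_energy_soft_tissue(i_high, i_low, w):
            """Weighted log subtraction: larger w removes more of the bone signal."""
            eps = 1e-6                                   # avoid log(0)
            return np.log(i_high + eps) - w * np.log(i_low + eps)

        # Placeholder high/low-kVp transmission images (values in (0, 1]).
        rng = np.random.default_rng(6)
        i_high = rng.uniform(0.2, 1.0, size=(256, 256))
        i_low = rng.uniform(0.1, 1.0, size=(256, 256))

        # In practice w is chosen (as in the abstract, via a CNR/noise cost function)
        # so that the bone contrast cancels while soft-tissue contrast remains.
        soft_tissue = dual_energy_soft_tissue(i_high, i_low, w=0.55)
        print(soft_tissue.shape, float(soft_tissue.mean()))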

  15. Sci-Thur AM: YIS – 07: Optimizing dual-energy x-ray parameters using a single filter for both high and low-energy images to enhance soft-tissue imaging

    International Nuclear Information System (INIS)

    Bowman, Wesley; Sattarivand, Mike

    2016-01-01

    Objective: To optimize dual-energy parameters of the ExacTrac stereoscopic x-ray imaging system for lung SBRT patients. Methods: Simulated spectra and a lung phantom were used to optimize filter material, thickness, kVps, and weighting factors to obtain bone-subtracted dual-energy images. Spektr simulations were used to identify material in the atomic number (Z) range [3–83] based on a metric defined to separate the spectra of the high and low energies. Both energies used the same filter due to time constraints of image acquisition in lung SBRT imaging. A lung phantom containing bone, soft tissue, and a tumor-mimicking material was imaged with filter thicknesses in the range [0–1] mm and kVps in the range [60–140]. A cost function based on the contrast-to-noise ratio of bone, soft tissue, and tumor, as well as image noise content, was defined to optimize filter thickness and kVp. Using the optimized parameters, dual-energy images of an anthropomorphic Rando phantom were acquired and evaluated for bone subtraction. Imaging dose was measured with the dual-energy technique using tin filtering. Results: Tin was the material of choice, providing the best energy separation, non-toxicity, and non-reactiveness. The best soft-tissue-only image in the lung phantom was obtained using 0.3 mm tin and a [140, 80] kVp pair. Dual-energy images of the Rando phantom had noticeable bone elimination when compared to no filtration. Dose was lower with tin filtering compared to no filtration. Conclusions: Dual-energy soft-tissue imaging is feasible with the ExacTrac stereoscopic imaging system utilizing a single tin filter for both high and low energies and optimized acquisition parameters.

  16. Toward a Global Bundle Adjustment of SPOT 5 - HRS Images

    Science.gov (United States)

    Massera, S.; Favé, P.; Gachet, R.; Orsoni, A.

    2012-07-01

    The HRS (High Resolution Stereoscopic) instrument carried on SPOT 5 enables quasi-simultaneous acquisition of stereoscopic images on 120 km-wide segments, with two forward- and backward-looking telescopes observing the Earth at an angle of 20° ahead of and behind the vertical. For 8 years IGN (Institut Géographique National) has been developing techniques to achieve spatiotriangulation of these images. During this time the capacity for bundle adjustment of SPOT 5 - HRS spatial images has largely improved. Today a single global block composed of about 20,000 images can be computed in reasonable calculation time. The progression was achieved step by step: the first computed blocks were composed of only 40 images, then bigger blocks were computed, and finally only one global block is now computed. At the same time, calculation tools have improved: for example, the adjustment of 2,000 images of North Africa takes about 2 minutes, whereas 8 hours were needed two years ago. To reach such a result, new independent software was developed to compute fast and efficient bundle adjustments. The equipment - GCPs (Ground Control Points) and tie points - and the techniques have also evolved over the last 10 years. Studies were made to derive recommendations about the equipment needed to make an accurate single block. Tie points can now be computed quickly and automatically with SURF (Speeded Up Robust Features) techniques. Today the updated equipment is composed of about 500 GCPs, and studies show that the ideal configuration is around 100 tie points per square degree. With such equipment, the location of the global HRS block becomes accurate to within a few meters, whereas unadjusted images are accurate to only about 15 m. This paper describes the methods used at IGN Espace to compute a single global block composed of almost 20,000 HRS images, 500 GCPs, and several million tie points in reasonable calculation time. Many advantages can be found in using such a block. Because the
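
    Automatic tie-point extraction of the kind mentioned above can be prototyped with a feature detector and descriptor matcher. The sketch below uses ORB from OpenCV as a freely available stand-in for SURF; the image file names are hypothetical placeholders.

        import cv2

        # Hypothetical overlapping satellite image chips (file names are placeholders).
        img1 = cv2.imread("hrs_scene_a.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("hrs_scene_b.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Brute-force Hamming matching with a ratio test to keep distinctive tie points.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        raw = matcher.knnMatch(des1, des2, k=2)
        tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
                      for m, n in raw if m.distance < 0.75 * n.distance]
        print(len(tie_points), "candidate tie points")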

  17. D3D augmented reality imaging system: proof of concept in mammography.

    Science.gov (United States)

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  18. An application of stereoscopy and image processing in forensics: recovering obliterated firearms serial number

    Science.gov (United States)

    da Silva Nunes, L. C.; dos Santos, Paulo Acioly M.

    2004-10-01

    We present an application of stereoscopy to the recovery of obliterated firearm serial numbers. We investigate a promising, inexpensive combined method using both non-destructive and destructive techniques. With a stereomicroscope coupled to a digital camera and a flexible cold light source, we capture images of the damaged area; with continued polishing, and sometimes with the help of image processing techniques, the observed images can be enhanced and recorded as evidence. This method has already proven useful, in certain cases, on aluminum dotted pistol frames whose serial number is printed with a laser, when etching techniques are not successful. We can also observe acid-treated steel surfaces and enhance the images of recovered serial numbers, which sometimes lack definition.
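
    Simple digital enhancement of the kind alluded to, averaging several exposures to suppress noise and then boosting local contrast, could look like the following sketch; the frames and parameters are illustrative assumptions, not taken from the paper.

        import numpy as np
        import cv2

        def enhance_serial_area(frames):
            """frames: list of grayscale images (uint8) of the polished area.
            Averaging suppresses sensor noise; CLAHE boosts faint relief contrast."""
            stack = np.mean(np.stack([f.astype(np.float32) for f in frames]), axis=0)
            stack = np.clip(stack, 0, 255).astype(np.uint8)
            clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
            return clahe.apply(stack)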

  19. Three-dimensional image signals: processing methods

    Science.gov (United States)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present research methods for processing "digital holograms" for Internet transmission, together with results.
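
    Four-step phase-shifting interferometry, a standard way to capture such digital holograms with an ordinary camera, recovers the object phase from four interferograms whose reference beam is shifted in 90° steps. A minimal numpy sketch with synthetic data (not the authors' implementation) follows.

        import numpy as np

        def four_step_phase(i0, i90, i180, i270):
            """Recover the wrapped phase from four interferograms with 0, 90, 180,
            and 270 degree reference shifts: phi = atan2(I270 - I90, I0 - I180)."""
            return np.arctan2(i270 - i90, i0 - i180)

        # Synthetic example: a tilted wavefront sampled on a 128x128 grid.
        y, x = np.mgrid[0:128, 0:128]
        true_phase = 0.05 * x + 0.02 * y
        shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
        frames = [1.0 + 0.8 * np.cos(true_phase + s) for s in shifts]
        wrapped = four_step_phase(*frames)   # equals true_phase modulo 2*pi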

  20. Autostereoscopic image creation by hyperview matrix controlled single pixel rendering

    Science.gov (United States)

    Grasnick, Armin

    2017-06-01

    Along with the increasing awareness of stereoscopic cinema, the perception of its limitations when watching movies with 3D glasses has emerged as well. It is not only that the additional glasses are uncomfortable and annoying; there are tangible arguments for avoiding 3D glasses. These "stereoscopic deficits" are caused by the 3D glasses themselves. In contrast to natural viewing with naked eyes, artificial 3D viewing with 3D glasses introduces specific "unnatural" side effects. Most moviegoers have experienced unspecific discomfort in 3D cinema, which they may have associated with insufficient image quality. Obviously, quality problems with 3D glasses can be solved by technical improvement. But this simple answer can - and already has - misled some decision makers into relaxing on the existing 3D-glasses solution. It needs to be underlined that there are inherent difficulties with the glasses which can never be solved by modest advancement, since the 3D glasses themselves cause them. To overcome the limitations of stereoscopy in display applications, several technologies have been proposed to create a 3D impression without the need for 3D glasses, known as autostereoscopy. But even today's autostereoscopic displays cannot solve all viewing problems and still show limitations. A hyperview display could be a suitable candidate, if it were possible to create an affordable device and generate the necessary content in an acceptable time frame. All autostereoscopic displays based on the idea of the lightfield, integral photography, or super-multiview can be unified within the concept of the hyperview. It is essential for functionality that each of these display technologies uses numerous different perspective images to create the 3D impression. Such a calculation of a very high number of views requires much more computing time than the formation of a simple stereoscopic image pair. The hyperview concept allows the screen image of any 3D

  1. X-ray imaging for security applications

    Science.gov (United States)

    Evans, J. Paul

    2004-01-01

    The X-ray screening of luggage by aviation security personnel may be badly hindered by the lack of visual cues to depth in an image that has been produced by transmitted radiation. Two-dimensional "shadowgraphs" with "organic" and "metallic" objects encoded using two different colors (usually orange and blue) are still in common use. In the context of luggage screening there are no reliable cues to depth present in individual shadowgraph X-ray images. Therefore, the screener is required to convert the 'zero depth resolution' shadowgraph into a three-dimensional mental picture to be able to interpret the relative spatial relationship of the objects under inspection. Consequently, additional cognitive processing is required e.g. integration, inference and memory. However, these processes can lead to serious misinterpretations of the actual physical structure being examined. This paper describes the development of a stereoscopic imaging technique enabling the screener to utilise binocular stereopsis and kinetic depth to enhance their interpretation of the actual nature of the objects under examination. Further work has led to the development of a technique to combine parallax data (to calculate the thickness of a target material) with the results of a basis material subtraction technique to approximate the target's effective atomic number and density. This has been achieved in preliminary experiments with a novel spatially interleaved dual-energy sensor which reduces the number of scintillation elements required by 50% in comparison to conventional sensor configurations.

  2. Virtual reality and stereoscopic telepresence

    International Nuclear Information System (INIS)

    Mertens, E.P.

    1994-12-01

    Virtual reality technology is commonly thought to have few, if any, applications beyond the national research laboratories, the aerospace industry, and the entertainment world. A team at Westinghouse Hanford Company (WHC) is developing applications for virtual reality technology that make it a practical, viable, portable, and cost-effective business and training tool. The technology transfer is particularly applicable to the waste management industry and has become a tool that can serve the entire work force spectrum, from industrial sites to business offices. For three and a half years, a small team of WHC personnel has been developing an effective and practical method of bringing virtual reality technology to the job site. The applications are practical, the results are repeatable, and the equipment costs are within the range of present-day office machines. That combination can evolve into a competitive advantage for commercial business interests. The WHC team has contained system costs by using commercially available equipment and personal computers to create effective virtual reality work stations for less than $20,000

  3. Computer-based endoscopic image-processing technology for endourology and laparoscopic surgery

    International Nuclear Information System (INIS)

    Igarashi, Tatsuo; Suzuki, Hiroyoshi; Naya, Yukio

    2009-01-01

    Endourology and laparoscopic surgery are evolving in accordance with developments in instrumentation and progress in surgical technique. Recent advances in computer and image-processing technology have enabled novel images to be created from conventional endoscopic and laparoscopic video images. Such technology harbors the potential to advance endourology and laparoscopic surgery by adding new value and function to the endoscope. The panoramic and three-dimensional images created by computer processing are two outstanding features that can address the shortcomings of conventional endoscopy and laparoscopy, such as a narrow field of view, lack of depth cues, and discontinuous information. The wide panoramic images show an 'anatomical map' of the abdominal cavity and hollow organs with high brightness and resolution, as the images are assembled from video frames taken in a close-up manner. To assist in laparoscopic surgery, especially in suturing, a three-dimensional movie can be obtained by enhancing movement parallax using a conventional monocular laparoscope. In tubular organs such as the prostatic urethra, reconstruction of the three-dimensional structure can be achieved, implying the possibility of a fluid-dynamic model for assessing local urethral resistance during urination. Computer-based processing of endoscopic images will establish new tools for endourology and laparoscopic surgery in the near future. (author)
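
    The panoramic 'anatomical map' described above is essentially a mosaic of many close-up video frames. A rough sketch of such stitching with OpenCV's high-level stitcher is given below; the frame file names are placeholders, and real endoscopic mosaicking needs more careful geometry and blending than this generic routine provides.

        import cv2

        # Placeholder frame files extracted from an endoscopic video sequence.
        frame_files = ["frame_000.png", "frame_010.png", "frame_020.png", "frame_030.png"]
        frames = [cv2.imread(f) for f in frame_files]

        stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode suits near-planar mosaics
        status, panorama = stitcher.stitch(frames)
        if status == cv2.Stitcher_OK:
            cv2.imwrite("endoscopic_panorama.png", panorama)
        else:
            print("Stitching failed with status", status)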

  4. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    Science.gov (United States)

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, a 3D cursor, and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.

  5. Safety Assessment of Wearing the AN/PVS-14 Monocular Night Vision Device (MNVD) and AN/AVS-6 Aviators' Night Vision Imaging System (ANVIS) During 5-Ton and HMMWV Night Driving

    National Research Council Canada - National Science Library

    Redden, Elizabeth

    2002-01-01

    .... The Communications-Electronics Command Directorate for Safety Risk Management, Fort Monmouth, New Jersey, will use the results of the assessment to determine the suitability of both devices for driving...

  6. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    Science.gov (United States)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
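
    The core operation in such hierarchical matching is a correlation search that is repeated from coarse to fine image pyramid levels with a shrinking search window. A compact illustration of the single-level building block (not the SETSM code, just the idea) is sketched here.

        import cv2
        import numpy as np

        def ncc_match(ref_patch, search_img):
            """Find the integer-pixel offset of ref_patch inside search_img by
            normalized cross-correlation; hierarchical schemes repeat this from
            coarse to fine pyramid levels, shrinking the search window each time."""
            score = cv2.matchTemplate(search_img, ref_patch, cv2.TM_CCOEFF_NORMED)
            _, best, _, loc = cv2.minMaxLoc(score)
            return np.array(loc), best   # (x, y) offset and correlation score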

  7. STRIPE: Remote Driving Using Limited Image Data

    Science.gov (United States)

    Kay, Jennifer S.

    1997-01-01

    prevention of disorientation, a common problem across all types of teleoperation systems. STRIPE is the only semi-autonomous teleoperation system that can accurately follow paths designated in monocular images on varying terrain. The thesis describes the STRIPE algorithm for tracking points using the incremental geometry model, insight into the design and redesign of the interface, an analysis of the effects of potential errors, details of the user studies, and hints on how to improve both the algorithm and interface for future designs.

  8. Performance test on 2-dimensional PIV and 3-dimensional PIV using standard images

    International Nuclear Information System (INIS)

    Hwang, Tae Gyu; Doh, Deog Hee

    2004-01-01

    A quantitative performance test of conventional 2D-PIV and hybrid angular 3D-PIV (stereoscopic PIV) was carried out. LES data sets for an impinging jet, which are provided on the webpage (http://www.vsj.or.jp/piv) for the PIV Standard Project, were used to generate virtual images. The generated virtual images were used for the 2D-PIV and 3D-PIV measurement tests. It was shown that the mean values obtained by 2D-PIV are slightly closer to the LES data than those obtained by 3D-PIV, but the turbulence properties obtained by 2D-PIV are more strongly underestimated than those obtained by 3D-PIV
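
    The displacement estimate at the heart of both 2D and stereoscopic PIV is a cross-correlation of interrogation windows between consecutive frames. A bare-bones FFT-based version is sketched below with synthetic windows, not the standard-project images.

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Integer-pixel displacement of window B relative to window A via
            FFT-based cross-correlation (mean removed to suppress background)."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            shape = np.array(corr.shape)
            disp = np.array(peak)
            disp[disp > shape // 2] -= shape[disp > shape // 2]   # wrap to negative shifts
            return disp  # (dy, dx) in pixels

        rng = np.random.default_rng(1)
        win_a = rng.random((32, 32))
        win_b = np.roll(win_a, shift=(3, -2), axis=(0, 1))   # known displacement
        print(piv_displacement(win_a, win_b))                # -> [ 3 -2]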

  9. Three-dimensional image capturing and representation for multimedia ambiance communication

    Science.gov (United States)

    Ichikawa, Tadashi; Iwasawa, Shoichiro; Yamada, Kunio; Kanamaru, Toshifumi; Naemura, Takeshi; Aizawa, Kiyoharu; Morishima, Shigeo; Saito, Takahiro

    2001-02-01

    Multimedia Ambiance Communication is a means of achieving shared-space communication in an immersive environment consisting of an arch-type stereoscopic projection display. Our goal is to enable shared-space communication by creating a photo-realistic three-dimensional (3D) image space that users can feel a part of. The concept of a layered structure defined for painting, such as long-range, mid-range, and short-range views, can be applied to a 3D image space. New techniques have been developed for a photo-realistic 3D image space, such as two-plane expression, high-quality panorama image generation, setting representation for image processing, and 3D image representation and generation. We also propose a life-like avatar within the 3D image space. To obtain the characteristics of the user's body, a human subject is scanned using a Cyberware™ whole-body scanner. The output from the scanner, a range image, is a good starting point for modeling the avatar's geometric shape. A generic human surface model is fitted to the range image. The obtained model is topologically equivalent even if our method is applied to another subject, so if a generic model with motion definitions is employed, common motion rules can be applied to all models made from the generic model.

  10. On so-called paradoxical monocular stereoscopy.

    NARCIS (Netherlands)

    Koenderink, Jan J.; van Doorn, Andrea J.; Kappers, A. M.

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods

  11. Monocular pedestrian detection: Survey and experiments

    NARCIS (Netherlands)

    Enzweiler, M.; Gavrila, D.M.

    2009-01-01

    Pedestrian detection is a rapidly evolving area in computer vision with key applications in intelligent vehicles, surveillance, and advanced robotics. The objective of this paper is to provide an overview of the current state of the art from both methodological and experimental perspectives. The

  12. A monocular, unconscious form of visual attention

    NARCIS (Netherlands)

    Self, M.W.; Roelfsema, P.R.

    2010-01-01

    Sudden changes in our visual field capture our attention so that we are faster and more accurate in our responses to that region of space. The underlying mechanisms by which these behavioral improvements occur are unknown. Here we investigate the level of the visual system at which attentional

  13. What is Stereopsis?

    Directory of Open Access Journals (Sweden)

    D Vishwanath

    2012-07-01

    Full Text Available "Stereopsis" refers to the characteristically vivid qualitative impression of 3D structure that is observed when real (or simulated) 3D scenes are viewed binocularly. Stereopsis is associated with a compelling perception of solidity or 3-dimensionality, a clear sense of space between objects, and a phenomenal sense of realism. These visual characteristics are conventionally thought to be a result of the different views of an object afforded by binocular vision (disparity) or self-motion (motion parallax). However, such visual characteristics can also be obtained under controlled monocular viewing of pictures. One explanation for the impression of monocular stereopsis is based on the notion of cue coherence/conflict (e.g., Ames, 1925). When a picture is viewed with both eyes, binocular cues specify the flat picture surface and are in conflict with the 3-dimensionality implied by the pictorial cues. The elimination of these conflicting cues under monocular viewing putatively causes the enhancement of the pictorial depth impression. The cue-coherence/conflict explanation also predicts a greater magnitude of perceived depth relief accompanying the greater impression of stereopsis. I will present an alternative theory that stereopsis is the conscious perception of the precision of the brain's estimate of absolute (egocentrically scaled) depth. Both qualitative and quantitative empirical results are consistent with this theory. Specifically, they show that (i) the same qualitative characteristics of depth impression are reported under binocular viewing of real scenes, stereoscopic images, and controlled monocular viewing of pictures; (ii) the impression of stereopsis is measurable and its variation under different viewing conditions is not consistent with a cue-conflict account; (iii) stereopsis can be elicited by manipulating egocentric distance cues when viewing pictures, without altering conflicting binocular cues; and (iv) under conditions that elicit

  14. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using the GT parameters.
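
    The hypothesis-verify loop underlying RANSAC-style estimators such as the LR-RANSAC variant mentioned above can be illustrated with a generic 2D line-fitting example; this is purely illustrative, since the paper matches 3D wireframe lines to image lines rather than fitting lines to points.

        import numpy as np

        def ransac_line(points, n_iter=200, tol=1.0, rng=None):
            """Fit a 2D line n.x = c to noisy points: hypothesize from a random
            minimal sample (2 points), verify by counting inliers, keep the best."""
            rng = rng or np.random.default_rng()
            best_model, best_inliers = None, 0
            for _ in range(n_iter):
                p, q = points[rng.choice(len(points), 2, replace=False)]
                d = q - p
                normal = np.array([-d[1], d[0]])
                norm = np.linalg.norm(normal)
                if norm == 0:
                    continue
                normal /= norm
                c = normal @ p
                dist = np.abs(points @ normal - c)
                inliers = np.count_nonzero(dist < tol)
                if inliers > best_inliers:
                    best_model, best_inliers = (normal, c), inliers
            return best_model, best_inliers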

  15. Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.

    Science.gov (United States)

    Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y

    2006-06-01

    An Internet browser-based annotation system can be used to identify and describe features in digitalized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer, that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image or annotations of a temporal sequence of images of a disease process, over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, which can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereoscopic images that are stereo pairs. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.
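
    Geometric properties of annotated figures, such as the area of a traced lesion outline, can be computed from the stored vertex lists. A small sketch using the shoelace formula follows; the annotation record format shown is an assumption, not the system's actual schema.

        def polygon_area(vertices):
            """Area of a simple polygon from its (x, y) vertices (shoelace formula).
            Vertices are assumed to be in image-pixel units; multiply by the square
            of the pixel size to convert to physical units."""
            area = 0.0
            n = len(vertices)
            for i in range(n):
                x1, y1 = vertices[i]
                x2, y2 = vertices[(i + 1) % n]
                area += x1 * y2 - x2 * y1
            return abs(area) / 2.0

        # Hypothetical annotation record with one outlined region.
        annotation = {"layer": "grader_1", "label": "drusen",
                      "outline": [(10, 10), (40, 12), (42, 35), (12, 33)]}
        print(polygon_area(annotation["outline"]), "px^2")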

  16. Laboratory observations of sediment transport using combined particle image and tracking velocimetry (Conference Presentation)

    Science.gov (United States)

    Frank, Donya; Calantoni, Joseph

    2017-05-01

    Improved understanding of coastal hydrodynamics and morphology will lead to more effective mitigation measures that reduce fatalities and property damage caused by natural disasters such as hurricanes. We investigated sediment transport under oscillatory flow over flat and rippled beds with phase-separated stereoscopic Particle Image Velocimetry (PIV). Standard PIV techniques severely limit measurements at the fluid-sediment interface and do not allow for the observation of separate phases in multi-phase flow (e.g. sand grains in water). We have implemented phase-separated Particle Image Velocimetry by adding fluorescent tracer particles to the fluid in order to observe fluid flow and sediment transport simultaneously. While sand grains scatter 532 nm wavelength laser light, the fluorescent particles absorb 532 nm laser light and re-emit light at a wavelength of 584 nm. Optical long-pass filters with a cut-on wavelength of 550 nm were installed on two cameras configured to perform stereoscopic PIV to capture only the light emitted by the fluorescent tracer particles. A third high-speed camera was used to capture the light scattered by the sand grains allowing for sediment particle tracking via particle tracking velocimetry (PTV). Together, these overlapping, simultaneously recorded images provided sediment particle and fluid velocities at high temporal and spatial resolution (100 Hz sampling with 0.8 mm vector spacing for the 2D-3C fluid velocity field). Measurements were made under a wide range of oscillatory flows over flat and rippled sand beds. The set of observations allow for the investigation of the relative importance of pressure gradients and shear stresses on sediment transport.

  17. Image Gallery

    Science.gov (United States)

    The Image Gallery contains high-quality digital photographs available from ... Select a category below to view additional thumbnail images. Images are available for direct download in 2 ...

  18. The preparation of Drosophila embryos for live-imaging using the hanging drop protocol.

    Science.gov (United States)

    Reed, Bruce H; McMillan, Stephanie C; Chaudhary, Roopali

    2009-03-13

    Green fluorescent protein (GFP)-based timelapse live-imaging is a powerful technique for studying the genetic regulation of dynamic processes such as tissue morphogenesis, cell-cell adhesion, or cell death. Drosophila embryos expressing GFP are readily imaged using either stereoscopic or confocal microscopy. A goal of any live-imaging protocol is to minimize detrimental effects such as dehydration and hypoxia. Previous protocols for preparing Drosophila embryos for live-imaging analysis have involved placing dechorionated embryos in halocarbon oil and sandwiching them between a halocarbon gas-permeable membrane and a coverslip. The introduction of compression through mounting embryos in this manner represents an undesirable complication for any biomechanical-based analysis of morphogenesis. Our method, which we call the hanging drop protocol, results in excellent viability of embryos during live imaging and does not require that embryos be compressed. Briefly, the hanging drop protocol involves the placement of embryos in a drop of halocarbon oil that is suspended from a coverslip, which is, in turn, fixed in position over a humid chamber. In addition to providing gas exchange and preventing dehydration, this arrangement takes advantage of the buoyancy of embryos in halocarbon oil to prevent them from drifting out of position during timelapse acquisition. This video describes in detail how to collect and prepare Drosophila embryos for live imaging using the hanging drop protocol. This protocol is suitable for imaging dechorionated embryos using stereomicroscopy or any upright compound fluorescence microscope.

  19. Design and Development of a New Multi-Projection X-Ray System for Chest Imaging

    Science.gov (United States)

    Chawla, Amarpreet S.; Boyce, Sarah; Washington, Lacey; McAdams, H. Page; Samei, Ehsan

    2009-02-01

    Overlapping anatomical structures may confound the detection of abnormal pathology, including lung nodules, in conventional single-projection chest radiography. To minimize this fundamental limiting factor, a dedicated digital multi-projection system for chest imaging was recently developed at the Radiology Department of Duke University. We are reporting the design of the multi-projection imaging system and its initial performance in an ongoing clinical trial. The system is capable of acquiring multiple full-field projections of the same patient along both the horizontal and vertical axes at variable speeds and acquisition frame rates. These images acquired in rapid succession from slightly different angles about the posterior-anterior (PA) orientation can be correlated to minimize the influence of overlying anatomy. The developed system has been tested for repeatability and motion blur artifacts to investigate its robustness for clinical trials. Excellent geometrical consistency was found in the tube motion, with positional errors for clinical settings within 1%. The effect of tube-motion on the image quality measured in terms of impact on the modulation transfer function (MTF) was found to be minimal. The system was deemed clinic-ready and a clinical trial was subsequently launched. The flexibility of image acquisition built into the system provides a unique opportunity to easily modify it for different clinical applications, including tomosynthesis, correlation imaging (CI), and stereoscopic imaging.

  20. Spectral domain optical coherence tomography cross-sectional image of optic nerve head during intraocular pressure elevation

    Directory of Open Access Journals (Sweden)

    Ji Young Lee

    2014-12-01

    Full Text Available AIM: To analyze changes of the optic nerve head (ONH) and peripapillary region during intraocular pressure (IOP) elevation in patients using spectral domain optical coherence tomography (SD-OCT). METHODS: Both an optic disc 200×200 cube scan and a high-definition 5-line raster scan were obtained using SD-OCT from open angle glaucoma patients presenting with monocular elevation of IOP (≥30 mm Hg). Additional baseline characteristics included age, gender, diagnosis, best-corrected visual acuity, refractive error, findings of slit lamp biomicroscopy, findings of dilated stereoscopic examination of the ONH and fundus, IOP, pachymetry findings, and the results of visual field testing. RESULTS: The 24 patients selected were divided into two groups: group 1 patients had no history of IOP elevation or glaucoma (n=14), and group 2 patients did have a history of IOP elevation or glaucoma (n=10). In each patient, the study eye with elevated IOP was classified into group H (high), and the fellow eye was classified into group L (low). The mean deviation (MD) differed significantly between groups H and L when all eyes were considered (P=0.047) and in group 2 (P=0.042), but not in group 1 (P=0.893). Retinal nerve fiber layer (RNFL) average thickness (P=0.050), rim area (P=0.015), vertical cup/disc ratio (P=0.011), cup volume (P=0.028), inferior quadrant RNFL thickness (P=0.017), clock-hour (1, 5, and 6) RNFL thicknesses (P=0.050, 0.012, and 0.018, respectively), cup depth (P=0.008), central prelaminar layer thickness (P=0.023), mid-inferior prelaminar layer thickness (P=0.023), and nasal retinal slope (P=0.034) were significantly different between the eyes in groups H and L. CONCLUSION: RNFL average thickness, rim area, vertical cup/disc ratio, cup volume, inferior quadrant RNFL thickness, and clock-hour (1, 5, and 6) RNFL thicknesses changed significantly during acute IOP elevation.

  1. Three-dimensional tomosynthetic image restoration for brachytherapy source localization

    International Nuclear Information System (INIS)

    Persons, Timothy M.

    2001-01-01

    Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third
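
    Wiener filtering, one of the canonical restoration methods named above, extends directly to three dimensions once the 3D impulse response is known. A frequency-domain sketch in numpy follows; the synthetic volume, Gaussian PSF, and regularization constant are illustrative assumptions.

        import numpy as np

        def wiener_deconvolve_3d(blurred, psf, k=0.01):
            """Frequency-domain Wiener deconvolution of a 3D volume.
            k approximates the noise-to-signal power ratio (a tuning parameter)."""
            H = np.fft.fftn(np.fft.ifftshift(psf), s=blurred.shape)
            G = np.fft.fftn(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + k)      # Wiener filter
            return np.real(np.fft.ifftn(W * G))

        # Synthetic example: blur a random volume with a Gaussian PSF, then restore it.
        rng = np.random.default_rng(2)
        vol = rng.random((32, 32, 32))
        z, y, x = np.mgrid[-16:16, -16:16, -16:16]
        psf = np.exp(-(x**2 + y**2 + z**2) / (2 * 1.5**2))
        psf /= psf.sum()
        blurred = np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(np.fft.ifftshift(psf))))
        restored = wiener_deconvolve_3d(blurred, psf, k=0.001)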

  2. Morphometric Optic Nerve Head Analysis in Glaucoma Patients: A Comparison between the Simultaneous Nonmydriatic Stereoscopic Fundus Camera (Kowa Nonmyd WX3D) and the Heidelberg Scanning Laser Ophthalmoscope (HRT III)

    Directory of Open Access Journals (Sweden)

    Siegfried Mariacher

    2016-01-01

    Full Text Available Purpose. To investigate retrospectively the agreement between morphometric optic nerve head parameters assessed with the confocal laser ophthalmoscope HRT III and the stereoscopic fundus camera Kowa nonmyd WX3D. Methods. Morphometric optic nerve head parameters of 40 eyes of 40 patients with primary open angle glaucoma were analyzed with regard to their vertical cup-to-disc ratio (CDR). Vertical CDR, disc area, cup volume, rim volume, and maximum cup depth were assessed with both devices by one examiner. Mean bias and limits of agreement (95% CI) were obtained using scatter plots and Bland-Altman analysis. Results. Overall vertical CDR comparison between HRT III and Kowa nonmyd WX3D measurements showed a mean difference (limits of agreement) of −0.06 (−0.36 to 0.24). For the CDR < 0.5 group (n=24), the mean difference in vertical CDR was −0.14 (−0.34 to 0.06), and for the CDR ≥ 0.5 group (n=16), it was 0.06 (−0.21 to 0.34). Conclusion. This study showed good agreement between the Kowa nonmyd WX3D and HRT III with regard to widely used optic nerve head parameters in patients with glaucomatous optic neuropathy. However, the Kowa nonmyd WX3D tended to measure larger CDR values than HRT III in the CDR < 0.5 group and lower CDR values in the CDR ≥ 0.5 group.
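
    The Bland-Altman quantities reported above, the mean bias and the 95% limits of agreement, are straightforward to compute. A small sketch with made-up paired CDR readings (not the study data) is shown below.

        import numpy as np

        def bland_altman(a, b):
            """Mean bias and 95% limits of agreement between paired measurements."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Illustrative paired vertical CDR readings (not the study data).
        kowa = [0.35, 0.42, 0.58, 0.61, 0.49, 0.72]
        hrt3 = [0.40, 0.45, 0.55, 0.60, 0.52, 0.68]
        bias, limits = bland_altman(kowa, hrt3)
        print(f"bias {bias:+.3f}, limits of agreement {limits[0]:.3f} to {limits[1]:.3f}")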

  3. Three-dimensional reconstruction of breast implants based on isocentric stereoscopic X-ray pictures (ISXP) for application monitoring and irradiation planning of a remote-controlled interstitial afterloading method

    Energy Technology Data Exchange (ETDEWEB)

    Loeffler, E.; Sauer, O.

    1988-01-01

    Individual irradiation planning and application monitoring by ISXP are presented for a remote-controlled interstitial afterloading technique using ¹⁹²Ir wires, which is applied in breast-preserving radiotherapy. The reconstruction errors of the implants are discussed. The error considerations for ISXP can be extended to other stereoscopic methods; in this case the quality considerations made by other authors have to be extended. The maximum reconstruction error was investigated, for a given digitization precision, focus size, and object blur due to the patient's movements, as a function of the deviation angle. The optimum deviation angle is about 45°, depending on the importance given to the individual parts, and is almost uninfluenced by the ratio of the isocenter-film distance to the focus-isocenter distance. For an optimized deviation angle, a displacement of an implant point by 1 mm leads to a maximum reconstruction error of 2 mm. Dosage is calculated according to the Paris system. If the circumcircle radius of the application triangle is modified by 1 mm, a dose change of 14% results in the case of very short wires and a small side length. Verification in a phantom showed a positioning error below 0.5 mm. The dose error is 2% owing to the mutual compensation of the direction-isotropic reconstruction errors of the needles, the number of which is between seven and nine.

  4. The use of image analysis for the study of interfacial bonding in solid composite propellant

    Directory of Open Access Journals (Sweden)

    JASMINA DOSTANIC

    2007-10-01

    Full Text Available In the framework of this research, the program Image Pro Plus was applied to determine the polymer–oxidizer interactions in HTPB-based composite propellants. In order to improve the interactions, different bonding agents were used, and their efficiency was analyzed. The determination of the quantity, area, and radius of non-bonded oxidizer crystals is presented. The position of the formed cracks in the specimen and their area have a great influence on the mechanical properties of the composite propellant. The preparation of the composite propellant specimens to enable photographing of their structure by means of stereoscopic and metallographic microscopes with a digital camera is also described.
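
    The counting and sizing of non-bonded oxidizer crystals described above is a standard threshold-and-label measurement. A sketch with scipy.ndimage follows; the brightness threshold, the pixel size, and the micrograph itself are placeholders rather than values from the study.

        import numpy as np
        from scipy import ndimage

        def measure_debonds(gray_img, threshold=180, pixel_size_um=1.0):
            """Count non-bonded (bright) regions in a micrograph, returning their
            areas and equivalent circular radii in micrometre units."""
            mask = gray_img > threshold                 # placeholder segmentation rule
            labels, n = ndimage.label(mask)
            areas_px = ndimage.sum(mask, labels, index=range(1, n + 1))
            areas = np.asarray(areas_px) * pixel_size_um ** 2
            radii = np.sqrt(areas / np.pi)              # equivalent-circle radius
            return n, areas, radii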

  5. Virtual X-ray imaging techniques in an immersive casting simulation environment

    International Nuclear Information System (INIS)

    Li, Ning; Kim, Sung-Hee; Suh, Ji-Hyun; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2007-01-01

    A computer code was developed to simulate radiographs of complex casting products in a CAVE™-like environment. The simulation is based on deterministic algorithms and ray-tracing techniques. The aim of this study is to examine CAD/CAE/CAM models at the design stage, to optimize the design, and to inspect predicted defective regions with high speed, good accuracy, and small numerical expense. The present work discusses the algorithms for the radiography simulation of CAD/CAM models and proposes algorithmic solutions adapted from the ray-box intersection algorithm and the octree data structure specifically for radiographic simulation of CAE models. The stereoscopic visualization of the full-size product in the immersive casting simulation environment, as well as the virtual X-ray images of castings, provides an effective tool for the design and evaluation of foundry processes by engineers and metallurgists
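
    The ray-box intersection test mentioned above, the classic slab method, is the inner loop of such radiograph simulators: the path length of each ray through a box or voxel feeds the attenuation integral. A minimal numpy version is sketched here.

        import numpy as np

        def ray_aabb_intersect(origin, direction, box_min, box_max):
            """Slab-method intersection of a ray with an axis-aligned box.
            Returns (hit, t_near, t_far) with t measured along the ray direction."""
            origin, direction = np.asarray(origin, float), np.asarray(direction, float)
            inv = 1.0 / np.where(direction == 0, 1e-12, direction)  # avoid divide-by-zero
            t1 = (np.asarray(box_min, float) - origin) * inv
            t2 = (np.asarray(box_max, float) - origin) * inv
            t_near = np.max(np.minimum(t1, t2))
            t_far = np.min(np.maximum(t1, t2))
            hit = t_far >= max(t_near, 0.0)
            return hit, t_near, t_far

        # The path length t_far - max(t_near, 0) through each box contributes to the
        # simulated attenuation along the ray.
        print(ray_aabb_intersect([0, 0, -5], [0, 0, 1], [-1, -1, -1], [1, 1, 1]))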

  6. Particle image and acoustic Doppler velocimetry analysis of a cross-flow turbine wake

    Science.gov (United States)

    Strom, Benjamin; Brunton, Steven; Polagye, Brian

    2017-11-01

    Cross-flow turbines have advantageous properties for converting kinetic energy in wind and water currents to rotational mechanical energy and subsequently electrical power. A thorough understanding of cross-flow turbine wakes aids understanding of rotor flow physics, assists geometric array design, and informs control strategies for individual turbines in arrays. In this work, the wake physics of a scale model cross-flow turbine are investigated experimentally. Three-component velocity measurements are taken downstream of a two-bladed turbine in a recirculating water channel. Time-resolved stereoscopic particle image and acoustic Doppler velocimetry are compared for planes normal to and distributed along the turbine rotational axis. Wake features are described using proper orthogonal decomposition, dynamic mode decomposition, and the finite-time Lyapunov exponent. Consequences for downstream turbine placement are discussed in conjunction with two-turbine array experiments.
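
    Proper orthogonal decomposition of such wake velocity fields reduces to an SVD of the mean-subtracted snapshot matrix. A compact sketch follows, with random snapshots standing in for the PIV data.

        import numpy as np

        def pod_modes(snapshots, n_modes=5):
            """snapshots: (n_points, n_times) matrix of velocity samples.
            Returns spatial modes, modal energy fractions, and temporal coefficients."""
            fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
            energy = s**2 / np.sum(s**2)                 # fraction of fluctuation energy per mode
            coeffs = np.diag(s) @ Vt                     # temporal coefficients
            return U[:, :n_modes], energy[:n_modes], coeffs[:n_modes]

        # Illustrative use with synthetic data (3000 velocity samples, 200 snapshots).
        rng = np.random.default_rng(3)
        X = rng.standard_normal((3000, 200))
        modes, energy, a = pod_modes(X, n_modes=3)
        print("energy captured by first 3 modes:", energy.sum())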

  7. Automated materials discrimination using 3D dual energy X ray images

    International Nuclear Information System (INIS)

    Wang, Ta Wee

    2002-01-01

    The ability of a human observer to identify an explosive device concealed in complex arrangements of objects routinely encountered in the 2D x-ray screening of passenger baggage at airports is often problematic. Standard dual-energy x-ray techniques enable colour encoding of the resultant images in terms of organic, inorganic and metal substances. This transmission imaging technique produces colour information computed from a high-energy x-ray signal and a low-energy x-ray signal (80 keV), enabling materials of low effective atomic number (Zeff ≤ 13) to be automatically discriminated from many layers of overlapping substances. This is achieved by applying a basis materials subtraction technique to the data provided by a wavelet image segmentation algorithm. This imaging technique relies on the image data for the masking substances being discriminated independently of the target material. Further work investigated the extraction of depth data from stereoscopic images to estimate the mass density of the target material. A binocular stereoscopic dual-energy x-ray machine previously developed by the Vision Systems Group at The Nottingham Trent University in collaboration with The Home Office Science and Technology Group provided the image data for the empirical investigation. This machine utilises a novel linear castellated dual-energy x-ray detector recently developed by the Vision Systems Group. This detector array employs half the number of scintillator-photodiode sensors in comparison to a conventional linear dual-energy sensor. The castellated sensor required the development of an image enhancement algorithm to remove the spatial interlace effect in the resultant images prior to the calibration of the system for materials discrimination. To automate the basis materials subtraction technique, a wavelet image segmentation and classification algorithm was developed. This enabled overlapping image structures in the x-rayed baggage to be partitioned. A series of experiments was conducted to investigate the

  8. Three-dimensional views of the nucleus of Comet 67P/Churyumov-Gerasimenko: an atlas of stereo anaglyphs from OSIRIS-NAC images

    Science.gov (United States)

    Lamy, Philippe L.; Romeuf, David; Faury, Guillaume; Durand, Joelle; Beigbeder, Laurent; Groussin, Olivier

    2017-10-01

    The Narrow Angle Camera (NAC) of the OSIRIS imaging system aboard ESA’s Rosetta spacecraft has acquired approximately 25000 images of the surface of the nucleus of comet 67P/Churyumov-Gerasimenko at various spatial scales down to centimeters per pixel. The bulk of these images have been obtained in sequences, and the combined displacement of the Rosetta orbiter along its trajectory and the rotation of the nucleus allow many pairs of images to be associated for stereoscopic viewing. This is achieved by constructing anaglyphs after rotating the images so that the relative shift appears horizontal. The shift is set to limit the parallax to approximately 2° (with a maximum value of 4°) for the foreground (to avoid image deformation), and the scene is placed behind the screen for optimal visual comfort. The rotation of the nucleus may have the adverse effect of introducing temporal incoherence, most prominently from the variation of the cast shadows. Various solutions are implemented to circumvent this problem, usually by cropping the maximum extent of the shadows. At the time of writing, approximately 900 anaglyphs have been produced and we expect to reach several thousand once the systematic search for suitable pairs is completed. We will present examples of anaglyphs. They will be searchable thanks to a dedicated database that documents each one, including its location on a 3D numerical model of the nucleus. Many possibilities for querying the parameters will be offered. It is anticipated that this atlas, available online in the near future, will be a valuable tool for fostering our understanding of the complex morphology of the cometary surface and of the processes at work, as well as offering spectacular stereoscopic views of the nucleus enjoyable by the general public.
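
    Constructing a red-cyan anaglyph from a rectified, horizontally shifted image pair is simple channel mixing. A sketch of the basic operation follows; the rotation and shift alignment of the image pairs described above is assumed to have been done already.

        import numpy as np

        def red_cyan_anaglyph(left_rgb, right_rgb):
            """Left image feeds the red channel, right image the green and blue
            channels; inputs are aligned HxWx3 uint8 arrays."""
            out = np.empty_like(left_rgb)
            out[..., 0] = left_rgb[..., 0]        # red from the left view
            out[..., 1:] = right_rgb[..., 1:]     # green and blue from the right view
            return out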

  9. Unilateral blindness with third cranial nerve palsy and abnormal enhancement of extraocular muscles on magnetic resonance imaging of orbit after the ingestion of methanol.

    Science.gov (United States)

    Chung, Tae Nyoung; Kim, Sun Wook; Park, Yoo Seok; Park, Incheol

    2010-05-01

    Methanol is generally known to cause visual impairment and various systemic manifestations. There are a few reported specific findings for methanol intoxication on magnetic resonance imaging (MRI) of the brain. A case is reported of unilateral blindness with third cranial nerve palsy oculus sinister (OS) after the ingestion of methanol. Unilateral damage of the retina and optic nerve was confirmed by fundoscopy, fluorescein angiography, visual evoked potential and electroretinogram. The optic nerve and extraocular muscles (superior rectus, medial rectus, inferior rectus and inferior oblique muscle) were enhanced by gadolinium-DTPA on MRI of the orbit. This is the first case report of permanent monocular blindness with confirmed unilateral damage of the retina and optic nerve, combined with third cranial nerve palsy after methanol ingestion.

  10. Simulated multipolarized MAPSAR images to distinguish agricultural crops

    Directory of Open Access Journals (Sweden)

    Wagner Fernando Silva

    2012-06-01

    Full Text Available Many researchers have shown the potential of Synthetic Aperture Radar (SAR) images for agricultural applications, particularly for monitoring regions with limitations in terms of acquiring cloud-free optical images. Recently, Brazil and Germany began a feasibility study on the construction of an orbital L-band SAR sensor referred to as MAPSAR (Multi-Application Purpose SAR). This sensor provides L-band images in three spatial resolutions and polarimetric, interferometric and stereoscopic capabilities. Thus, studies are needed to evaluate the potential of future MAPSAR images. The objective of this study was to evaluate multipolarized MAPSAR images simulated by the airborne SAR-R99B sensor to distinguish coffee, cotton and pasture fields in Brazil. Discrimination among crops was evaluated through graphical and cluster analysis of mean backscatter values, considering single, dual and triple polarizations. The planting row direction of coffee influenced the backscatter, and coffee was divided into two classes: parallel and perpendicular to the sensor look direction. Single polarizations had poor ability to discriminate the crops. The overall accuracies were less than 59%, but the understanding of the microwave interaction with the crops could be explored. Combinations of two polarizations could differentiate various fields of crops, highlighting the combination VV-HV, which reached 78% overall accuracy. The use of three polarizations resulted in 85.4% overall accuracy, indicating that the classes pasture and parallel coffee were fully discriminated from the other classes. These results confirmed the potential of multipolarized MAPSAR images to distinguish the studied crops and showed considerable improvement in the accuracy of the results when the number of polarizations was increased.

  11. The Computer Image Generation Applications Study.

    Science.gov (United States)

    1980-07-01

    [Fragment of a visual-database model listing from the report, including entries such as T62 Tank, Lexington Carrier, Sea Scape, Fresnel Lens Optical Landing System (FLOLS), Meatball, and T37 Aircraft with associated numerical counts, plus a cross-reference to section 7.1.5.5 for the definition of monocular movement parallax and a heading "(g) Multiple Simulations".]

  12. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

  13. Image Guidance

    Science.gov (United States)

    Guidance that explains the process for getting images approved in One EPA Web microsites and resource directories. It includes an appendix that shows examples of what makes some images better than others and how some images convey meaning more effectively than others.

  14. Data imaging

    International Nuclear Information System (INIS)

    Pepy, G.

    1999-01-01

    After an introduction about data imaging in general, the principles of imaging data collected via neutron scattering experiments are presented. Some computer programs designed for data imaging purposes are reviewed. (K.A.)

  15. Pancreatic imaging

    International Nuclear Information System (INIS)

    Potsaid, M.S.

    1978-01-01

    The clinical use of [75Se]selenomethionine for visualising the pancreas is described. The physiological considerations, imaging procedure, image interpretations and reliability are considered. (C.F.)

  16. Cortical dynamics of figure-ground separation in response to 2D pictures and 3D scenes: How V2 combines border ownership, stereoscopic cues, and Gestalt grouping rules

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2016-01-01

    Full Text Available The FACADE model, and its laminar cortical realization and extension in the 3D LAMINART model, have explained, simulated, and predicted many perceptual and neurobiological data about how the visual cortex carries out 3D vision and figure-ground perception, and how these cortical mechanisms enable 2D pictures to generate 3D percepts of occluding and occluded objects. In particular, these models have proposed how border ownership occurs, but have not yet explicitly explained the correlation between multiple properties of border ownership neurons in cortical area V2 that were reported in a remarkable series of neurophysiological experiments by von der Heydt and his colleagues; namely, border ownership, contrast preference, binocular stereoscopic information, selectivity for side-of-figure, Gestalt rules, and strength of attentional modulation, as well as the time course during which such properties arise. This article shows how, by combining 3D LAMINART properties that were discovered in two parallel streams of research, a unified explanation of these properties emerges. This explanation proposes, moreover, how these properties contribute to the generation of consciously seen 3D surfaces. The first research stream models how processes like 3D boundary grouping and surface filling-in interact in multiple stages within and between the V1 interblob – V2 interstripe – V4 cortical stream and the V1 blob – V2 thin stripe – V4 cortical stream, respectively. Of particular importance for understanding figure-ground separation is how these cortical interactions convert computationally complementary boundary and surface mechanisms into a consistent conscious percept, including the critical use of surface contour feedback signals from surface representations in V2 thin stripes to boundary representations in V2 interstripes. Remarkably, key figure-ground properties emerge from these feedback interactions. The second research stream shows how cells that

  17. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3 1Department of Radiology, Stanford University, Palo Alto, CA, 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA, 3Department of Radiology, Fort Benning, Columbus, GA, USA Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  18. Image city

    DEFF Research Database (Denmark)

    2003-01-01

    Image city exhibition explores a condition of mediation, through a focus on image and sound narratives with a point of departure on a number of Asian cities.

  19. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of each stereo model can be computed automatically. With the help of 3D reference points or distances at the object, or a defined camera-base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The
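
    The ICP step used above to merge partial point clouds alternates nearest-neighbour correspondence with a least-squares rigid fit. A minimal numpy/scipy sketch of one such loop follows; unlike production implementations it has no outlier rejection or convergence test.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(source, target, iterations=30):
            """Align source (Nx3) to target (Mx3); returns rotation R and translation t."""
            R, t = np.eye(3), np.zeros(3)
            src = source.copy()
            tree = cKDTree(target)
            for _ in range(iterations):
                _, idx = tree.query(src)                       # nearest-neighbour pairs
                matched = target[idx]
                mu_s, mu_t = src.mean(0), matched.mean(0)
                H = (src - mu_s).T @ (matched - mu_t)
                U, _, Vt = np.linalg.svd(H)
                R_step = Vt.T @ U.T
                if np.linalg.det(R_step) < 0:                  # avoid reflections
                    Vt[-1] *= -1
                    R_step = Vt.T @ U.T
                t_step = mu_t - R_step @ mu_s
                src = src @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step
            return R, t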

  20. Using Stereo Vision to Support the Automated Analysis of Surveillance Videos

    Science.gov (United States)

    Menze, M.; Muhle, D.

    2012-07-01

    Video surveillance systems are no longer a collection of independent cameras, manually controlled by human operators. Instead, smart sensor networks are developed, able to fulfil certain tasks on their own and thus supporting security personnel by automated analyses. One well-known task is the derivation of people's positions on a given ground plane from monocular video footage. An improved accuracy for the ground position as well as a more detailed representation of single salient people can be expected from a stereoscopic processing of overlapping views. Related work mostly relies on dedicated stereo devices or camera pairs with a small baseline. While this set-up is helpful for the essential step of image matching, the high accuracy potential of a wide baseline and the according good intersection geometry is not utilised. In this paper we present a stereoscopic approach, working on overlapping views of standard pan-tilt-zoom cameras which can easily be generated for arbitrary points of interest by an appropriate reconfiguration of parts of a sensor network. Experiments are conducted on realistic surveillance footage to show the potential of the suggested approach and to investigate the influence of different baselines on the quality of the derived surface model. Promising estimations of people's position and height are retrieved. Although standard matching approaches show helpful results, future work will incorporate temporal dependencies available from image sequences in order to reduce computational effort and improve the derived level of detail.