WorldWideScience

Sample records for monocular vision-based navigation

  1. Monocular Vision SLAM for Indoor Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Koray Çelik

    2013-01-01

Full Text Available This paper presents a novel indoor navigation and ranging strategy using a monocular camera. By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM. The navigation strategy assumes an indoor or indoor-like manmade environment that is GPS-denied, whose layout is previously unknown, and which is representable via energy-based feature points and straight architectural lines. We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities. Building and enabling such a small aerial vehicle to fly in tight corridors is a significant technological challenge, especially in the absence of GPS signals and with limited sensing options. Experimental results show that the system is limited only by the capabilities of the camera and environmental entropy.
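To make the ground-plane side of such monocular ranging concrete, here is a minimal sketch (not the paper's algorithm, which also exploits architectural lines): with a known camera height and pitch, the image row of a feature on the floor fixes its range under a pinhole model. All parameter values are illustrative.

```python
import numpy as np

def ground_plane_range(v_px, f_px, v0_px, cam_height_m, cam_pitch_rad=0.0):
    """Range to a floor feature from its image row (pinhole camera model).

    v_px: image row of the feature (pixels, increasing downward)
    f_px: focal length in pixels; v0_px: principal-point row
    cam_height_m: camera height above the floor
    cam_pitch_rad: downward pitch of the optical axis
    """
    # Angle below the optical axis at which the feature is observed.
    angle = np.arctan2(v_px - v0_px, f_px) + cam_pitch_rad
    if angle <= 0.0:
        return np.inf  # at or above the horizon: no ground intersection
    return cam_height_m / np.tan(angle)

# Example: feature 80 px below the principal point, f = 600 px, camera 0.5 m high.
print(ground_plane_range(v_px=400.0, f_px=600.0, v0_px=320.0, cam_height_m=0.5))
```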

  2. A Behaviour-Based Architecture for Mapless Navigation Using Vision

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Guzel

    2012-04-01

Full Text Available Autonomous robots operating in an unknown and uncertain environment must be able to cope with dynamic changes to that environment. Navigating successfully to a goal while avoiding obstacles is a challenging problem for a mobile robot in a cluttered environment. This paper presents a new behaviour-based architecture design for mapless navigation. The architecture is composed of several modules, and each module generates behaviours. A novel method, inspired by a visual homing strategy, is adapted to a monocular vision-based system to overcome goal-based navigation problems. A neural-network-based obstacle avoidance strategy is designed using a 2-D scanning laser. To evaluate the performance of the proposed architecture, the system has been tested using Microsoft Robotics Studio (MRS), a very powerful 3D simulation environment. In addition, real experiments guiding a Pioneer 3-DX mobile robot, equipped with a pan-tilt-zoom camera, in a cluttered environment are presented. The analysis of the results allows us to validate the proposed behaviour-based navigation strategy.

  3. Vision Based Navigation for Autonomous Cooperative Docking of CubeSats

    Science.gov (United States)

    Pirat, Camille; Ankersen, Finn; Walker, Roger; Gass, Volker

    2018-05-01

A realistic rendezvous and docking navigation solution applicable to CubeSats is investigated. A scalability analysis of the guidance, navigation and control (GNC) performance of the ESA Automated Transfer Vehicle and of the Russian docking system shows that docking two CubeSats would require a lateral control performance of the order of 1 cm. Line-of-sight constraints and multipath effects affecting Global Navigation Satellite System (GNSS) measurements in close proximity prevent the use of this sensor for the final approach. This consideration and the high control accuracy requirement led to the use of vision sensors for the final 10 m of the rendezvous and docking sequence. A single monocular camera on the chaser satellite and various sets of Light-Emitting Diodes (LEDs) on the target vehicle ensure the observability of the system throughout the approach trajectory. The simple and novel formulation of the measurement equations unambiguously differentiates rotations from translations between the target and chaser docking ports and allows a navigation performance better than 1 mm at docking. Furthermore, the non-linear measurement equations can be solved to provide an analytic navigation solution. This solution can be used to monitor the navigation filter solution and ensure its stability, adding an extra layer of robustness for autonomous rendezvous and docking. The navigation filter initialization is addressed in detail. The proposed method is able to differentiate LED signals from Sun reflections, as demonstrated by experimental data. The navigation filter uses comprehensive linearised coupled rotation/translation dynamics describing the chaser-to-target docking-port motion. The handover between GNSS and vision sensor measurements is assessed. The performance of the navigation function along the approach trajectory is discussed.
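At its core, estimating the chaser-to-target pose from one camera and a known LED pattern is a Perspective-n-Point problem. A minimal OpenCV sketch follows; the LED coordinates, intrinsics, and detected centroids are hypothetical placeholders, and the paper's own analytic formulation is not reproduced here.

```python
import cv2
import numpy as np

# Hypothetical coplanar LED positions on the target docking port (metres, target frame).
led_points_3d = np.array([[0.00, 0.00, 0.0],
                          [0.05, 0.00, 0.0],
                          [0.00, 0.05, 0.0],
                          [0.05, 0.05, 0.0]], dtype=np.float64)

# Detected LED centroids in the chaser camera image (pixels), in the same order.
led_points_2d = np.array([[310.0, 240.0],
                          [372.0, 242.0],
                          [308.0, 178.0],
                          [370.0, 176.0]], dtype=np.float64)

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])   # assumed intrinsics
dist = np.zeros(5)                       # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(led_points_3d, led_points_2d, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation target->camera; tvec is the relative position
print(ok, tvec.ravel())
```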

  4. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    Science.gov (United States)

    Celik, Koray

This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and is designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Although the emphasis is on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  5. Vision Based Autonomous Robot Navigation Algorithms and Implementations

    CERN Document Server

    Chatterjee, Amitava; Nirmal Singh, N

    2013-01-01

This book is devoted to the theory and development of autonomous navigation of mobile robots using computer-vision-based sensing. Conventional robot navigation systems, utilizing traditional sensors such as ultrasonic, IR, GPS, and laser sensors, suffer several drawbacks related either to the physical limitations of the sensor or to high cost. Vision sensing has emerged as a popular alternative: cameras can reduce the overall cost while maintaining a high degree of intelligence, flexibility and robustness. This book includes a detailed description of several new approaches for real-life vision-based autonomous navigation algorithms and SLAM. It presents the concept of how subgoal-based, goal-driven navigation can be carried out using vision sensing. The development of vision-based robots for path/line tracking using fuzzy logic is presented, as well as how a low-cost robot can be indigenously developed in the laboratory with microcontroller-based sensor systems. The book descri...

  6. Relating binocular and monocular vision in strabismic and anisometropic amblyopia.

    Science.gov (United States)

    Agrawal, Ritwick; Conner, Ian P; Odom, J V; Schwartz, Terry L; Mendola, Janine D

    2006-06-01

    To examine deficits in monocular and binocular vision in adults with amblyopia and to test the following 2 hypotheses: (1) Regardless of clinical subtype, the degree of impairment in binocular integration predicts the pattern of monocular acuity deficits. (2) Subjects who lack binocular integration exhibit the most severe interocular suppression. Seven subjects with anisometropia, 6 subjects with strabismus, and 7 control subjects were tested. Monocular tests included Snellen acuity, grating acuity, Vernier acuity, and contrast sensitivity. Binocular tests included Titmus stereo test, binocular motion integration, and dichoptic contrast masking. As expected, both groups showed deficits in monocular acuity, with subjects with strabismus showing greater deficits in Vernier acuity. Both amblyopic groups were then characterized according to the degree of residual stereoacuity and binocular motion integration ability, and 67% of subjects with strabismus compared with 29% of subjects with anisometropia were classified as having "nonbinocular" vision according to our criterion. For this nonbinocular group, Vernier acuity is most impaired. In addition, the nonbinocular group showed the most dichoptic contrast masking of the amblyopic eye and the least dichoptic contrast masking of the fellow eye. The degree of residual binocularity and interocular suppression predicts monocular acuity and may be a significant etiological mechanism of vision loss.

  7. Robot Navigation Control Based on Monocular Images: An Image Processing Algorithm for Obstacle Avoidance Decisions

    Directory of Open Access Journals (Sweden)

    William Benn

    2012-01-01

Full Text Available This paper covers the use of monocular vision to control autonomous navigation for a robot in a dynamically changing environment. The solution focuses on using colour segmentation against a selected floor plane to distinctly separate obstacles from traversable space; this is then supplemented with Canny edge detection to separate boundaries whose colour is similar to the floor plane. The resulting binary map (where white identifies an obstacle-free area and black identifies an obstacle) can then be processed by fuzzy logic or neural networks to control the robot's next movements. Findings show that the algorithm performed strongly on solid-coloured carpets and on wooden and concrete floors, but had difficulty separating colours on multicoloured floor types such as patterned carpets.
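A minimal OpenCV sketch of the segmentation idea described above; the colour band is derived from a sample floor patch and the Canny thresholds are illustrative, not the paper's tuned values.

```python
import cv2
import numpy as np

def traversability_map(bgr, floor_sample_bgr):
    """Binary map: white = floor-like (traversable), black = obstacle."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sample = cv2.cvtColor(floor_sample_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3)
    # Colour band spanning most of the sampled floor pixels.
    lo = np.percentile(sample, 5, axis=0).astype(np.uint8)
    hi = np.percentile(sample, 95, axis=0).astype(np.uint8)
    floor_mask = cv2.inRange(hsv, lo, hi)                  # colour segmentation
    # Canny edges split boundaries whose colour resembles the floor.
    edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    floor_mask[edges > 0] = 0
    return floor_mask
```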

  8. Research on robot navigation vision sensor based on grating projection stereo vision

    Science.gov (United States)

    Zhang, Xiaoling; Luo, Yinsheng; Lin, Yuchi; Zhu, Lei

    2016-10-01

A novel visual navigation method based on grating projection stereo vision for mobile robots in dark environments is proposed. The method combines grating projection profilometry using plane structured light with stereo vision technology. It can be employed to realize obstacle detection, SLAM (Simultaneous Localization and Mapping) and visual odometry for mobile robot navigation in dark environments, without the image matching required in stereo vision and without the phase unwrapping required in grating projection profilometry. First, we study the new vision sensor theoretically and build a geometric and mathematical model of the grating projection stereo vision system. Second, a computational method for the 3D coordinates of obstacles in the robot's visual field is studied, allowing the obstacles in the field to be located accurately. Simulation results and analysis show that this research helps address the problem of autonomous mobile robot navigation in dark environments, and provides a theoretical basis and exploration direction for further study of space exploration robot navigation in dark, GPS-denied environments.

  9. Disseminated neurocysticercosis presenting as isolated acute monocular painless vision loss

    Directory of Open Access Journals (Sweden)

    Gaurav M Kasundra

    2014-01-01

Full Text Available Neurocysticercosis, the most common parasitic infection of the nervous system, is known to affect the brain, eyes, muscular tissues and subcutaneous tissues. However, it is very rare for patients with ocular cysts to have concomitant cerebral cysts. Moreover, the dominant clinical manifestation in patients with cerebral cysts is either seizures or headache. We report a patient who presented with acute monocular painless vision loss due to intraocular submacular cysticercosis and who, on investigation, had multiple cerebral parenchymal cysticercal cysts but never had any seizures. Although such vision loss after initiation of antiparasitic treatment has been mentioned previously, acute monocular vision loss as the presenting feature of ocular cysticercosis is rare. We present a brief review of the literature along with this case report.

  10. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2016-01-01

    Monocular vision is increasingly used in Micro Air Vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicles’ movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the
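One standard way to extract distance and velocity cues from raw optical flow, in the spirit of this record, is through the divergence of the flow field: for a camera approaching a surface, mean divergence relates to time-to-contact. A sketch under that assumption (illustrative Farneback parameters, not the authors' method):

```python
import cv2
import numpy as np

def time_to_contact(prev_gray, gray, dt):
    """Estimate time-to-contact from the mean divergence of dense optical flow.

    For pure approach toward a fronto-parallel surface, divergence ~ 2/TTC;
    with a known height or distance, velocity follows. A rough sketch only.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    du_dx = np.gradient(flow[..., 0], axis=1)   # pixels/pixel per frame
    dv_dy = np.gradient(flow[..., 1], axis=0)
    div = float(np.mean(du_dx + dv_dy)) / dt    # divergence per second
    return np.inf if div <= 0.0 else 2.0 / div  # seconds to contact
```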

  11. Distance and velocity estimation using optical flow from a monocular camera

    NARCIS (Netherlands)

    Ho, H.W.; de Croon, G.C.H.E.; Chu, Q.

    2017-01-01

    Monocular vision is increasingly used in micro air vehicles for navigation. In particular, optical flow, inspired by flying insects, is used to perceive vehicle movement with respect to the surroundings or sense changes in the environment. However, optical flow does not directly provide us the

  12. Optimization of dynamic envelope measurement system for high speed train based on monocular vision

    Science.gov (United States)

    Wu, Bin; Liu, Changjie; Fu, Luhua; Wang, Zhong

    2018-01-01

The dynamic envelope curve is defined as the maximum limit outline swept by the train under various adverse effects during running; it is an important basis for setting railway clearance boundaries. At present, the dynamic envelope curve of high-speed vehicles is mainly measured by binocular vision, and existing measuring systems suffer from poor portability, a complicated process and high cost. In this paper, a new measurement system based on monocular vision measurement theory and an analysis of the test environment is designed; the measurement system parameters, the calibration of the wide-field-of-view camera, and the calibration of the laser plane are designed and optimized. Repeated tests and analysis of the experimental data verify an accuracy of up to 2 mm, validating the feasibility and adaptability of the measurement system. The system offers lower cost, simpler measurement and data processing, and more reliable data, and it needs no matching algorithm.

  13. Vision/INS Integrated Navigation System for Poor Vision Navigation Environments

    Directory of Open Access Journals (Sweden)

    Youngsun Kim

    2016-10-01

    Full Text Available In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these aiding sensors, a vision sensor is of particular note due to its benefits in terms of weight, cost, and power consumption. This paper proposes an inertial and vision integrated navigation method for poor vision navigation environments. The proposed method uses focal plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is not enough for navigation. In order to verify the proposed method, computer simulations and van tests are carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

  14. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    OpenAIRE

    Kia, Chua; Arshad, Mohd Rizal

    2006-01-01

    This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remote Operated Vehicles (ROVs) operations. A prototype which combines computer vision with an underwater robotics system is successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and fuzzy inference system ...

  15. Monocular Vision-Based Robot Localization and Target Tracking

    Directory of Open Access Journals (Sweden)

    Bing-Fei Wu

    2011-01-01

Full Text Available This paper presents a vision-based technology for localizing targets in a 3D environment. This is achieved by combining different types of sensors, including optical wheel encoders, an electrical compass, and visual observations from a single camera. Based on the robot motion model and image sequences, an extended Kalman filter is applied to estimate target locations and the robot pose simultaneously. The proposed localization system is applicable in practice because it does not require initialization from artificial landmarks of known size. The technique is especially suitable for navigation and target tracking with an indoor robot, and has high potential for extension to surveillance and monitoring with Unmanned Aerial Vehicles carrying aerial odometry sensors. The experimental results demonstrate centimetre-level localization accuracy for targets in an indoor environment under high-speed robot movement.
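The fusion loop in such systems is a standard EKF predict/update cycle. A compact sketch for the encoder-plus-compass part of the state estimation (pose only, illustrative noise handling; the paper's full filter also carries target locations in the state):

```python
import numpy as np

def ekf_predict(x, P, d, dtheta, Q):
    """Propagate pose [x, y, theta] with wheel-encoder odometry.

    d: distance travelled since the last step; dtheta: heading change.
    """
    th = x[2]
    x_new = x + np.array([d * np.cos(th), d * np.sin(th), dtheta])
    F = np.array([[1.0, 0.0, -d * np.sin(th)],
                  [0.0, 1.0,  d * np.cos(th)],
                  [0.0, 0.0,  1.0]])            # Jacobian of the motion model
    return x_new, F @ P @ F.T + Q

def ekf_update_compass(x, P, z_theta, r_var):
    """Fuse an electrical-compass heading measurement with variance r_var."""
    H = np.array([[0.0, 0.0, 1.0]])
    # Wrapped heading innovation keeps the update continuous across +/- pi.
    y = np.array([np.arctan2(np.sin(z_theta - x[2]), np.cos(z_theta - x[2]))])
    S = H @ P @ H.T + r_var
    K = P @ H.T @ np.linalg.inv(S)
    return x + (K @ y).ravel(), (np.eye(3) - K @ H) @ P
```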

  16. Detection and Tracking Strategies for Autonomous Aerial Refuelling Tasks Based on Monocular Vision

    Directory of Open Access Journals (Sweden)

    Yingjie Yin

    2014-07-01

Full Text Available Detection and tracking strategies based on monocular vision are proposed for autonomous aerial refuelling tasks. The drogue attached to the fuel tanker aircraft has two important features: the grey values of the drogue's inner part differ from those of the external umbrella ribs in the image, and the shape of the drogue's inner dark part is nearly circular. Based on this crucial prior knowledge, coarse and fine positioning algorithms are designed to detect the drogue. A particle filter based on the drogue's shape is proposed to track it, and a strategy to switch between detection and tracking is proposed to improve the robustness of the algorithms. The inner dark part of the drogue is segmented precisely during detection and tracking, and the segmented circular part can be used to measure its spatial position. The experimental results show that the proposed method performs well in real time, with satisfactory robustness and positioning accuracy.
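The tracking stage can be pictured as a conventional particle filter over the drogue centre in the image. The sketch below assumes a user-supplied `measure_fn` (hypothetical) that scores how well a circle hypothesised at a candidate centre matches the dark inner region; all tuning values are illustrative.

```python
import numpy as np

def particle_filter_step(particles, weights, measure_fn, motion_std=3.0):
    """One predict/weight/resample cycle over candidate drogue centres (N, 2)."""
    n = len(weights)
    # Predict: random-walk motion model in image coordinates.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Weight: likelihood from the shape/grey-level match score.
    weights = weights * np.array([measure_fn(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (np.arange(n) + np.random.rand()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```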

  17. A Monocular Vision Measurement System of Three-Degree-of-Freedom Air-Bearing Test-Bed Based on FCCSP

    Science.gov (United States)

    Gao, Zhanyu; Gu, Yingying; Lv, Yaoyu; Xu, Zhenbang; Wu, Qingwen

    2018-06-01

A monocular vision-based pose measurement system is presented for real-time measurement of a three-degree-of-freedom (3-DOF) air-bearing test-bed. First, a circular-plane cooperative target is designed, and an image of the target fixed on the test-bed is acquired. Blob-analysis-based image processing is used to detect the object circles on the target, and a fast algorithm (FCCSP) based on pixel statistics is proposed to extract the centers of the object circles. Finally, pose measurements are obtained by combining the extracted centers with the coordinate transformation relation. Experiments show that the proposed method is fast, accurate, and robust enough to satisfy the requirements of the pose measurement.
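The blob-analysis step maps naturally onto OpenCV's connected-component statistics; a minimal sketch of extracting circle centres by pixel statistics (the Otsu threshold and area cut-off are illustrative, and FCCSP itself is not reproduced):

```python
import cv2
import numpy as np

def circle_centres(gray, min_area=20):
    """Centres of bright target circles via thresholding + component centroids."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
    # Label 0 is the background; small components are rejected as speckle.
    return np.array([centroids[i] for i in range(1, n)
                     if stats[i, cv2.CC_STAT_AREA] >= min_area])
```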

  18. IMPROVING CAR NAVIGATION WITH A VISION-BASED SYSTEM

    Directory of Open Access Journals (Sweden)

    H. Kim

    2015-08-01

Full Text Available The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they perform poorly and unreliably where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on the fusion of a GPS with in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable throughout the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet accurate and reliable navigation systems required by intelligent or autonomous vehicles.

  19. Improving Car Navigation with a Vision-Based System

    Science.gov (United States)

    Kim, H.; Choi, K.; Lee, I.

    2015-08-01

The real-time acquisition of accurate positions is very important for the proper operation of driver assistance systems and autonomous vehicles. Since current systems mostly depend on GPS and map-matching techniques, they perform poorly and unreliably where GPS signals are blocked or weak. In this study, we propose a vision-oriented car navigation method based on the fusion of a GPS with in-vehicle sensors. We employ a single-photo resection process to derive the position and attitude of the camera, and thus those of the car. These image georeferencing results are combined with the other sensory data in a sensor fusion framework for more accurate position estimation using an extended Kalman filter. The proposed system estimated positions with an accuracy of 15 m even though GPS signals were unavailable throughout the entire 15-minute test drive. The proposed vision-based system can be effectively utilized for the low-cost yet accurate and reliable navigation systems required by intelligent or autonomous vehicles.

  20. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems

    Directory of Open Access Journals (Sweden)

    Amedeo Rodi Vetrella

    2016-12-01

Full Text Available Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance, in terms of position and attitude accuracy, may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is on outdoor environments, and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated into a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility of exploiting magnetic- and inertial-independent accurate attitude information.

  1. A Hybrid Architecture for Vision-Based Obstacle Avoidance

    Directory of Open Access Journals (Sweden)

    Mehmet Serdar Güzel

    2013-01-01

Full Text Available This paper proposes a new obstacle avoidance method, called the Hybrid Architecture, which uses a single monocular vision camera as the only sensor. This architecture integrates a high-performance appearance-based obstacle detection method into an optical-flow-based navigation system. The hybrid architecture was designed and implemented to run both methods simultaneously and is able to combine the results of each method using a novel arbitration mechanism. The proposed strategy successfully fuses two different vision-based obstacle avoidance methods using this arbitration mechanism to permit a safer obstacle avoidance system. Accordingly, to establish the adequacy of the design of the obstacle avoidance system, a series of experiments were conducted. The results demonstrate the characteristics of the proposed architecture and show that its performance is somewhat better than that of the conventional optical-flow-based architecture. In particular, the robot employing the Hybrid Architecture avoids lateral obstacles more smoothly and robustly than when using the conventional optical-flow-based technique.

  2. Autonomous Landing and Ingress of Micro-Air-Vehicles in Urban Environments Based on Monocular Vision

    Science.gov (United States)

    Brockers, Roland; Bouffard, Patrick; Ma, Jeremy; Matthies, Larry; Tomlin, Claire

    2011-01-01

    Unmanned micro air vehicles (MAVs) will play an important role in future reconnaissance and search and rescue applications. In order to conduct persistent surveillance and to conserve energy, MAVs need the ability to land, and they need the ability to enter (ingress) buildings and other structures to conduct reconnaissance. To be safe and practical under a wide range of environmental conditions, landing and ingress maneuvers must be autonomous, using real-time, onboard sensor feedback. To address these key behaviors, we present a novel method for vision-based autonomous MAV landing and ingress using a single camera for two urban scenarios: landing on an elevated surface, representative of a rooftop, and ingress through a rectangular opening, representative of a door or window. Real-world scenarios will not include special navigation markers, so we rely on tracking arbitrary scene features; however, we do currently exploit planarity of the scene. Our vision system uses a planar homography decomposition to detect navigation targets and to produce approach waypoints as inputs to the vehicle control algorithm. Scene perception, planning, and control run onboard in real-time; at present we obtain aircraft position knowledge from an external motion capture system, but we expect to replace this in the near future with a fully self-contained, onboard, vision-aided state estimation algorithm. We demonstrate autonomous vision-based landing and ingress target detection with two different quadrotor MAV platforms. To our knowledge, this is the first demonstration of onboard, vision-based autonomous landing and ingress algorithms that do not use special purpose scene markers to identify the destination.
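The planar-homography step the authors describe has a direct OpenCV analogue: fit a homography between feature tracks on the (assumed planar) surface and decompose it into candidate motions. A sketch, leaving out the solution disambiguation and waypoint generation:

```python
import cv2
import numpy as np

def plane_motion_hypotheses(pts_prev, pts_cur, K):
    """Relative rotation/translation/normal hypotheses from a tracked plane.

    pts_prev, pts_cur: (N, 2) float32 correspondences on the planar target;
    K: 3x3 camera intrinsics. OpenCV returns up to four (R, t, n) solutions;
    picking the physically valid one (e.g. via visibility) is omitted here.
    """
    H, inliers = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, 3.0)
    n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    return Rs, ts, normals
```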

  3. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2005-09-01

Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of grey-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.

  4. Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking and Navigation

    Directory of Open Access Journals (Sweden)

    Chua Kia

    2008-11-01

Full Text Available This paper presents a robotics vision-based heuristic reasoning system for underwater target tracking and navigation. This system is introduced to improve the level of automation of underwater Remotely Operated Vehicle (ROV) operations. A prototype which combines computer vision with an underwater robotics system was successfully designed and developed to perform target tracking and intelligent navigation. This study focuses on developing image processing algorithms and a fuzzy inference system for the analysis of the terrain. The vision system developed is capable of interpreting the underwater scene by extracting subjective uncertainties of the object of interest. Subjective uncertainties are further processed as multiple inputs of a fuzzy inference system that is capable of making crisp decisions concerning where to navigate. An important part of the image analysis is morphological filtering. The applications focus on binary images with the extension of grey-level concepts. An open-loop fuzzy control system is developed for classifying the traverse of terrain. A notable achievement is the system's capability to recognize and track the object of interest (a pipeline) in perspective view based on perceived conditions. The effectiveness of this approach is demonstrated by computer and prototype simulations. This work originated from the desire to develop a robotics vision system with the ability to mimic the human expert's judgement and reasoning when maneuvering an ROV in the traverse of underwater terrain.

  5. Low computation vision-based navigation for a Martian rover

    Science.gov (United States)

    Gavin, Andrew S.; Brooks, Rodney A.

    1994-01-01

    Construction and design details of the Mobot Vision System, a small, self-contained, mobile vision system, are presented. This system uses the view from the top of a small, roving, robotic vehicle to supply data that is processed in real-time to safely navigate the surface of Mars. A simple, low-computation algorithm for constructing a 3-D navigational map of the Martian environment to be used by the rover is discussed.

  6. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    Science.gov (United States)

    2015-03-26

Master's thesis by David W. Jones, Capt, USAF, Air Force Institute of Technology (AFIT-ENG-MS-15-M-020): Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System. The work is a product of the U.S. Government and is not subject to copyright protection in the United States. Distribution unlimited.

  7. A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system

    International Nuclear Information System (INIS)

    Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong

    2014-01-01

A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing a monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guaranteeing the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features as the light pen targets, and two or more cone holes with known distance(s) between them. The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the system camera. The proposed method requires no extra equipment other than the system camera, so it is easier to implement and flexible in use. It has been incorporated into a large field-of-view MVB-CMS, which uses actively luminous infrared LEDs as target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method.

  8. An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment

    Directory of Open Access Journals (Sweden)

    Deshi Li

    2016-01-01

    Full Text Available Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environment due to their mobility and operability. However, accurate range estimation using vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between distance increment and a camera’s poses variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate incensement. To demonstrate the superiority of our proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves in the range accuracy significantly.

  9. Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments on monocular distance learning

    Science.gov (United States)

    van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario

    2017-11-01

Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment, even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo-vision-equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth from a monocular image, using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online-learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
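The essence of the SSL scheme, past stereo depth acting as ground truth for a monocular estimator, fits in a few lines. A deliberately simple sketch with ridge regression standing in for the real learner; the monocular feature extraction (e.g. per-image texture statistics) is assumed to happen elsewhere:

```python
import numpy as np

def ssl_fit_monocular_depth(mono_features, stereo_depths, lam=1e-3):
    """Fit features (N, d) -> trusted stereo average depths (N,) by ridge regression."""
    X = np.hstack([mono_features, np.ones((len(mono_features), 1))])  # bias term
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ stereo_depths)
    return w

def predict_depth(w, feature_vec):
    """Monocular average-depth estimate, usable once a camera has failed."""
    return float(np.append(feature_vec, 1.0) @ w)
```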

  10. Navigation integrity monitoring and obstacle detection for enhanced-vision systems

    Science.gov (United States)

    Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter

    2001-08-01

Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot; unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear; the outcome of this runway check also contributes to the integrity analysis. Concurrently with this investigation, radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our

  11. Vision Sensor-Based Road Detection for Field Robot Navigation

    Directory of Open Access Journals (Sweden)

    Keyu Lu

    2015-11-01

Full Text Available Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection thanks to their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, following the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the overall performance, scale sensitivity, and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art.

  12. A method of real-time detection for distant moving obstacles by monocular vision

    Science.gov (United States)

    Jia, Bao-zhi; Zhu, Ming

    2013-12-01

In this paper, we propose an approach for detecting distant moving obstacles, such as cars and bicycles, with a monocular camera that cooperates with ultrasonic sensors under low-cost conditions. We aim to detect distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation for the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level indicating whether it is coming closer. Results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
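A common realisation of "frame differencing after ego-motion compensation" is to estimate a background homography from tracked features, warp the previous frame onto the current one, and difference; residual motion then flags independently moving obstacles. A sketch with illustrative thresholds (it assumes a textured, mostly static scene so that feature tracking and the homography fit succeed):

```python
import cv2
import numpy as np

def moving_obstacle_mask(prev_gray, gray):
    """Binary mask of independently moving regions between two frames."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, 400, 0.01, 8)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good0, good1 = p0[st == 1], p1[st == 1]
    # Dominant homography ~ camera ego-motion over the (mostly static) scene.
    H, _ = cv2.findHomography(good0, good1, cv2.RANSAC, 3.0)
    h, w = gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(gray, stabilized)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
```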

  13. Vision-Based SLAM System for Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-03-01

Full Text Available The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  14. Vision-Based SLAM System for Unmanned Aerial Vehicles.

    Science.gov (United States)

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-03-15

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed in order to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor will be used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks will be used to perform a fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of the inclusion of camera measurements into the system. In this sense the estimation of the trajectory of the vehicle is considerably improved, compared with the estimates obtained using only the measurements from the position sensor, which are commonly low-rated and highly noisy.

  15. [Acute monocular loss of vision : Differential diagnostic considerations apart from the internistic etiological clarification].

    Science.gov (United States)

    Rickmann, A; Macek, M A; Szurman, P; Boden, K

    2017-08-03

We report a case of acute painless monocular loss of vision in a 53-year-old man. An interdisciplinary etiological evaluation of the arterial branch occlusion revealed no pathological findings. A reevaluation of the patient history suggested a possible association with the administration of a phosphodiesterase type 5 (PDE5) inhibitor. A critical review of the literature on PDE5 inhibitor administration with ocular involvement was performed.

  16. Multi-focal Vision and Gaze Control Improve Navigation Performance

    Directory of Open Access Journals (Sweden)

    Kolja Kuehnlenz

    2008-11-01

Full Text Available Multi-focal vision systems comprise cameras with various fields of view and measurement accuracies. This article presents a multi-focal approach to localization and mapping for mobile robots with active vision. The novel concept is implemented in a humanoid robot navigation scenario where the robot is visually guided through a structured environment with several landmarks. Various embodiments of multi-focal vision systems are investigated, and their impact on navigation performance is evaluated in comparison to a conventional mono-focal stereo set-up. The comparative studies clearly show the benefits of multi-focal vision for mobile robot navigation: flexibility to assign the different available sensors optimally in each situation, enhancement of the visible field, higher localization accuracy, and, thus, better task performance, i.e. path-following behavior of the mobile robot. It is shown that multi-focal vision may strongly improve navigation performance.

  17. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    Science.gov (United States)

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

Vision navigation determines position and attitude via real-time processing of data collected from imaging sensors, and can proceed without a high-performance global positioning system (GPS) or inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, deep space navigation, and multi-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach, aided by imaging sensors, that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed using sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on a linear index of road segments, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image against the GRID. The image matched to the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height during GPS outages of 5 min and within 1500 m.
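The search-and-match step against a geo-referenced image database can be sketched with ORB features and a ratio test. Here `grid_images`, a list of (image, geo_tag) pairs, is a hypothetical stand-in for the paper's linearly indexed GRID, and the final 3D parameter solve is omitted:

```python
import cv2

def best_grid_match(query_gray, grid_images, min_good=30):
    """Return the geo-tag of the database image best matching a live frame."""
    orb = cv2.ORB_create(1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    kq, dq = orb.detectAndCompute(query_gray, None)
    best_geo, best_count = None, min_good
    for img, geo in grid_images:
        kd, dd = orb.detectAndCompute(img, None)
        if dq is None or dd is None:
            continue
        pairs = bf.knnMatch(dq, dd, k=2)
        # Lowe ratio test keeps only distinctive matches.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_count:
            best_geo, best_count = geo, len(good)
    return best_geo, best_count
```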

  18. Vision enhanced navigation for unmanned systems

    Science.gov (United States)

    Wampler, Brandon Loy

A vision-based simultaneous localization and mapping (SLAM) algorithm is evaluated for use on unmanned systems. SLAM is a technique used by a vehicle to build a map of an environment while concurrently keeping track of its location within the map, without a priori knowledge. The work in this thesis is focused on using SLAM as a navigation solution when global positioning system (GPS) service is degraded or temporarily unavailable. Previous work on unmanned systems that led to the determination that a better navigation solution than GPS alone is needed is presented first. This previous work includes control of unmanned systems, simulation, and unmanned vehicle hardware testing. The proposed SLAM algorithm follows the work originally developed by Davison et al., who dub their algorithm MonoSLAM [1--4]. A new approach using the Pyramidal Lucas-Kanade feature tracking algorithm from Intel's OpenCV (open computer vision) library is presented as a means of keeping correct landmark correspondences as the vehicle moves through the scene. Though this landmark tracking method is unusable for long-term SLAM due to its inability to recognize revisited landmarks, unlike the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), its computational efficiency makes it a good candidate for short-term navigation between GPS position updates. Additional sensor information is then considered by fusing INS and GPS information into the SLAM filter. The SLAM system, in its vision-only and vision/IMU forms, is tested on a table top, in an open room, and finally in an outdoor environment. For the outdoor environment, a form of the SLAM algorithm that fuses vision, IMU, and GPS information is tested. The proposed SLAM algorithm, in its several forms, is implemented in C++ using an Extended Kalman Filter (EKF). Experiments utilizing a live video feed from a webcam are performed. The different forms of the filter are compared and conclusions are made on
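Since the thesis names OpenCV's pyramidal Lucas-Kanade tracker as the landmark-correspondence mechanism, a minimal usage sketch is shown below; the window size and termination criteria are illustrative defaults, not the thesis configuration.

```python
import cv2

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def track_landmarks(prev_gray, gray, prev_pts):
    """Propagate landmark image positions between frames with pyramidal LK.

    prev_pts: (N, 1, 2) float32 feature locations from the previous frame.
    Returns surviving locations plus a mask for pruning the SLAM state.
    """
    cur_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                    prev_pts, None, **lk_params)
    good = status.ravel() == 1
    return cur_pts[good], good
```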

  19. Cross-orientation masking in human color vision: application of a two-stage model to assess dichoptic and monocular sources of suppression.

    Science.gov (United States)

    Kim, Yeon Jin; Gheiratmand, Mina; Mullen, Kathy T

    2013-05-28

    Cross-orientation masking (XOM) occurs when the detection of a test grating is masked by a superimposed grating at an orthogonal orientation, and is thought to reveal the suppressive effects mediating contrast normalization. Medina and Mullen (2009) reported that XOM was greater for chromatic than achromatic stimuli at equivalent spatial and temporal frequencies. Here we address whether the greater suppression found in binocular color vision originates from a monocular or interocular site, or both. We measure monocular and dichoptic masking functions for red-green color contrast and achromatic contrast at three different spatial frequencies (0.375, 0.75, and 1.5 cpd, 2 Hz). We fit these functions with a modified two-stage masking model (Meese & Baker, 2009) to extract the monocular and interocular weights of suppression. We find that the weight of monocular suppression is significantly higher for color than achromatic contrast, whereas dichoptic suppression is similar for both. These effects are invariant across spatial frequency. We then apply the model to the binocular masking data using the measured values of the monocular and interocular sources of suppression and show that these are sufficient to account for color binocular masking. We conclude that the greater strength of chromatic XOM has a monocular origin that transfers through to the binocular site.
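For orientation, the two-stage gain-control framework invoked here can be written schematically. This is a generic sketch with free weights and exponents, not the exact parameterization fitted by Meese and Baker (2009) or in this study: stage 1 applies monocular contrast gain control with interocular (dichoptic) suppression, and stage 2 sums the two eyes' responses under a second nonlinearity.

```latex
% Schematic two-stage contrast gain-control model (generic form; the cited
% work's fitted exponents and weights differ). C_L, C_R: test contrasts in
% the left/right eyes; M_L, M_R: mask contrasts; w_m, w_d: monocular and
% dichoptic suppression weights estimated from the masking functions.
\begin{align}
  \text{Stage 1 (monocular):} \quad
  r_L &= \frac{C_L^{\,m}}{s_1 + C_L + w_m M_L + w_d M_R},
  \qquad r_R \text{ analogously}, \\
  \text{Stage 2 (binocular):} \quad
  R &= \frac{(r_L + r_R)^{\,p}}{s_2 + (r_L + r_R)^{\,q}}.
\end{align}
```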

  20. Design and Implementation of a Fully Autonomous UAV's Navigator Based on Omni-directional Vision System

    Directory of Open Access Journals (Sweden)

    Seyed Mohammadreza Kasaei

    2011-12-01

Full Text Available Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications and have seen more widespread use in the military, scenic, and civilian sectors in recent years. Autonomy is one of the major advantages of these vehicles, so it is necessary to develop particular sensors to provide efficient navigation functions. The helicopter is stabilized with visual information through the control loop, and omnidirectional vision can be a useful sensor for this purpose, either as the only sensor or as a complementary one. In this paper, we propose a novel method for UAV path planning based on electrical potential, using an omnidirectional vision system for navigation and path planning.

  1. Generalization of Figure-Ground Segmentation from Binocular to Monocular Vision in an Embodied Biological Brain Model

    Science.gov (United States)

    2011-08-01

Abstract (fragment): ...figure and ground, the luminance cue breaks down and gestalt contours can fail to pop out. In this case we rely on color, which, having weak stereopsis... Subject terms: figure-ground, neural network, object

  2. Adaptive Monocular Visual-Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices.

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-11-07

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual-inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual-inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual-inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual-inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method.

  3. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin-Chun Piao

    2017-11-01

Full Text Available Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and a next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and an inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of the keyframe trajectory is approximately 0.0617 m on the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of the performance improvement achieved using the proposed method.

  4. Adaptive Monocular Visual–Inertial SLAM for Real-Time Augmented Reality Applications in Mobile Devices

    Science.gov (United States)

    Piao, Jin-Chun; Kim, Shin-Dug

    2017-01-01

    Simultaneous localization and mapping (SLAM) is emerging as a prominent issue in computer vision and next-generation core technology for robots, autonomous navigation and augmented reality. In augmented reality applications, fast camera pose estimation and true scale are important. In this paper, we present an adaptive monocular visual–inertial SLAM method for real-time augmented reality applications in mobile devices. First, the SLAM system is implemented based on the visual–inertial odometry method that combines data from a mobile device camera and inertial measurement unit sensor. Second, we present an optical-flow-based fast visual odometry method for real-time camera pose estimation. Finally, an adaptive monocular visual–inertial SLAM is implemented by presenting an adaptive execution module that dynamically selects visual–inertial odometry or optical-flow-based fast visual odometry. Experimental results show that the average translation root-mean-square error of keyframe trajectory is approximately 0.0617 m with the EuRoC dataset. The average tracking time is reduced by 7.8%, 12.9%, and 18.8% when different level-set adaptive policies are applied. Moreover, we conducted experiments with real mobile device sensors, and the results demonstrate the effectiveness of performance improvement using the proposed method. PMID:29112143

  5. Intelligent Vision System for Door Sensing Mobile Robot

    Directory of Open Access Journals (Sweden)

    Jharna Majumdar

    2012-08-01

    Full Text Available Wheeled mobile robots find numerous applications in indoor man-made structured environments. In order to operate effectively, a robot must be capable of sensing its surroundings; computer vision is one of the prime research areas directed towards achieving this sensing capability. In this paper, we present a Door Sensing Mobile Robot capable of navigating in the indoor environment. A robust and inexpensive approach for recognition and classification of doors, based on a monocular vision system, helps the mobile robot in decision making. To prove the efficacy of the algorithm, we designed and developed a differentially driven mobile robot. A wall-following behavior using ultrasonic range sensors is employed by the mobile robot for navigation in corridors. Field Programmable Gate Arrays (FPGAs) have been used to implement the PD controller for wall following and the PID controller that controls the speed of the geared DC motor.
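
    At its core, the wall-following behaviour described above reduces to a PD law on the side-facing ultrasonic range. A minimal sketch follows; the gains, set distance and cycle time are illustrative, not the paper's FPGA parameters.

        def pd_wall_follow(range_m, prev_error, desired_m=0.5,
                           kp=2.0, kd=0.5, dt=0.05):
            """One PD step; positive output steers away from the wall."""
            error = desired_m - range_m           # > 0 means too close
            derivative = (error - prev_error) / dt
            return kp * error + kd * derivative, error

        # Usage, once per control cycle with the latest sonar reading:
        #   steer, err = pd_wall_follow(sonar_range_m, err)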

  6. Discrete-State-Based Vision Navigation Control Algorithm for One Bipedal Robot

    Directory of Open Access Journals (Sweden)

    Dunwen Wei

    2015-01-01

    Full Text Available Navigation toward a specific objective can be defined by specifying a desired timed trajectory, and the concept of a desired direction field is proposed to deal with this navigation problem. To permit a principled discussion of the accuracy and efficiency of navigation algorithms, strictly quantitative definitions of tracking error, actuator effect, and time efficiency are established. In this paper, a vision navigation control method based on the desired direction field is proposed. The method uses discrete image sequences to form a discrete state space, which makes it especially suitable for bipedal walking robots with a single camera walking on a barrier-free planar surface to track a specific objective without overshoot. The shortest path method (SPM) is proposed to design such a direction field with the highest time efficiency, and an improved control method based on a canonical piecewise-linear function (PLF) is also proposed. In order to restrain noise disturbance from the camera sensor, a bandwidth control method is presented that significantly decreases the influence of the error. The robustness and efficiency of the proposed algorithm are illustrated through a number of computer simulations that take the camera-sensor error into account. Simulation results show that robustness and efficiency can be balanced by choosing a proper bandwidth control value.
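
    A piecewise-linear steering law with a noise-suppressing band can be illustrated as a map from heading error to turn command with a dead zone around zero; this is a hedged sketch of the general idea, and the break points and band width below are illustrative values, not the paper's.

        import numpy as np

        def plf_turn_command(heading_error, band=0.05):
            """Piecewise-linear steering command in [-1, 1]. Errors inside
            the +/-band dead zone (camera noise) produce no correction;
            larger errors ramp up linearly and then saturate."""
            xs = [-np.pi, -0.5, -band, band, 0.5, np.pi]
            ys = [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]
            return float(np.interp(heading_error, xs, ys))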

  7. Effects of brief daily periods of unrestricted vision during early monocular form deprivation on development of visual area 2.

    Science.gov (United States)

    Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Harwerth, Ronald S; Smith, Earl L; Chino, Yuzo M

    2011-09-14

    Providing brief daily periods of unrestricted vision during early monocular form deprivation reduces the depth of amblyopia. To gain insights into the neural basis of the beneficial effects of this treatment, the binocular and monocular response properties of neurons were quantitatively analyzed in visual area 2 (V2) of form-deprived macaque monkeys. Beginning at 3 weeks of age, infant monkeys were deprived of clear vision in one eye for 12 hours every day until 21 weeks of age. They received daily periods of unrestricted vision for 0, 1, 2, or 4 hours during the form-deprivation period. After behavioral testing to measure the depth of the resulting amblyopia, microelectrode-recording experiments were conducted in V2. The ocular dominance imbalance away from the affected eye was reduced in the experimental monkeys and was generally proportional to the reduction in the depth of amblyopia in individual monkeys. There were no interocular differences in the spatial properties of V2 neurons in any subject group. However, the binocular disparity sensitivity of V2 neurons was significantly higher and binocular suppression was lower in monkeys that had unrestricted vision. The decrease in ocular dominance imbalance in V2 was the neuronal change most closely associated with the observed reduction in the depth of amblyopia. The results suggest that the degree to which extrastriate neurons can maintain functional connections with the deprived eye (i.e., reducing undersampling for the affected eye) is the most significant factor associated with the beneficial effects of brief periods of unrestricted vision.

  8. Vision Assisted Laser Scanner Navigation for Autonomous Robots

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Andersen, Nils Axel; Ravn, Ole

    2008-01-01

    This paper describes a navigation method based on road detection using both a laser scanner and a vision sensor. The method classifies the surface in front of the robot into traversable segments (road) and obstacles using the laser scanner; this classifies the area just in front of the robot ...

  9. An Approach for Environment Mapping and Control of Wall Follower Cellbot Through Monocular Vision and Fuzzy System

    OpenAIRE

    Farias, Karoline de M.; Rodrigues Junior, Wilson Leal; Bezerra Neto, Ranulfo P.; Rabelo, Ricardo A. L.; Santana, Andre M.

    2017-01-01

    This paper presents an approach that uses range measurements obtained through homography calculation to build a 2D visual occupancy grid and control the robot through monocular vision. The approach is designed for a Cellbot architecture. The robot is equipped with a wall-following behavior to explore the environment, which enables it to trail object contours, while a fuzzy controller is responsible for providing the commands for correct execution of the robot's movements when facing the advers...

  10. Simplification of Visual Rendering in Simulated Prosthetic Vision Facilitates Navigation.

    Science.gov (United States)

    Vergnieux, Victor; Macé, Marc J-M; Jouffrais, Christophe

    2017-09-01

    Visual neuroprostheses are still limited, and simulated prosthetic vision (SPV) is used to evaluate the potential and forthcoming functionality of these implants. SPV has been used to evaluate the minimum requirements on visual neuroprosthetic characteristics to restore various functions such as reading, object and face recognition, object grasping, etc. Some of these studies focused on obstacle avoidance, but only a few investigated orientation or navigation abilities with prosthetic vision. The resolution of current electrode arrays is not sufficient to allow navigation tasks without additional processing of the visual input. In this study, we simulated a low-resolution array (15 × 18 electrodes, similar to a forthcoming generation of arrays) and evaluated the navigation abilities restored when visual information was processed with various computer vision algorithms to enhance the visual rendering. Three main visual rendering strategies were compared to a control rendering in a wayfinding task within an unknown environment. The control rendering corresponded to a resizing of the original image onto the electrode array size, according to the average brightness of the pixels. In the first rendering strategy, the viewing distance was limited to 3, 6, or 9 m. In the second strategy, the rendering was based not on the brightness of the image pixels, but on the distance between the user and the elements in the field of view. In the last rendering strategy, only the edges of the environment were displayed, similar to a wireframe rendering. All the tested renderings, except the 3 m limitation of the viewing distance, improved navigation performance and decreased cognitive load. Interestingly, the distance-based and wireframe renderings also improved the cognitive mapping of the unknown environment. These results show that low-resolution implants are usable for wayfinding if specific computer vision algorithms are used to select and display appropriate
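
    For reference, the control rendering described above (resizing the image onto the electrode array according to average pixel brightness) amounts to a block mean. A sketch under the stated 15 x 18 array assumption, taking a 2D grayscale numpy array as input:

        import numpy as np

        def control_rendering(gray, rows=15, cols=18):
            """Average-brightness downsampling of a grayscale frame onto a
            rows x cols electrode array. Edge pixels beyond an integer
            multiple of the block size are cropped for simplicity."""
            h, w = gray.shape
            hc, wc = (h // rows) * rows, (w // cols) * cols
            blocks = gray[:hc, :wc].astype(float).reshape(
                rows, hc // rows, cols, wc // cols)
            return blocks.mean(axis=(1, 3))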

  11. Monocular Vision System for Fixed Altitude Flight of Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Kuo-Lung Huang

    2015-07-01

    Full Text Available The fastest and most economical method of acquiring terrain images is aerial photography, and the use of unmanned aerial vehicles (UAVs) has been investigated for this task. However, UAVs present a range of challenges, such as maintaining flight altitude. This paper reports a method that combines skyline detection with a stereo vision algorithm to enable the flight altitude of UAVs to be maintained. A monocular camera is mounted on the underside of the aircraft’s nose to collect continuous ground images, and the relative altitude is obtained via a stereo vision algorithm using the velocity of the UAV. Image detection is used to obtain terrain images and to measure the relative altitude from the ground to the UAV. The UAV flight system can then be set to fly at a fixed and relatively low altitude so as to obtain ground images of constant resolution. A forward-looking camera is mounted on the top of the aircraft’s nose; in combination with the skyline detection algorithm, this helps the aircraft maintain a stable flight pattern. Experimental results show that the proposed system enables UAVs to obtain terrain images at constant resolution and to estimate the relative altitude along the flight path.
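
    The relative-altitude computation described above is a stereo-from-motion triangulation: two frames taken dt seconds apart at ground speed v form a stereo pair with baseline v*dt. A simplified sketch, assuming a nadir-pointing camera and flat ground (not the paper's exact formulation):

        def altitude_from_motion(v_mps, dt_s, disparity_px, focal_px):
            """h ~= f * (v * dt) / disparity for a nadir camera over flat
            ground, where disparity is the pixel shift of a ground feature
            between the two frames."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive")
            return focal_px * (v_mps * dt_s) / disparity_px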

  12. Vision based systems for UAV applications

    CERN Document Server

    Kuś, Zygmunt

    2013-01-01

    This monograph is motivated by a significant number of vision-based algorithms for unmanned aerial vehicles (UAVs) that were developed during research and development projects. Vision information is utilized in various applications such as visual surveillance, aiming systems, recognition systems, collision-avoidance systems and navigation. This book presents practical applications, examples and recent challenges in these application fields. The aim of the book is to create a valuable source of information for researchers and constructors of solutions utilizing vision from UAVs. Scientists, researchers and graduate students involved in computer vision, image processing, data fusion, control algorithms, mechanics, data mining, navigation and IC can find many valuable, useful and practical suggestions and solutions. The latest challenges for vision-based systems are also presented.

  13. Challenges of pin-point landing for planetary landing: the LION absolute vision-based navigation approach and experimental results

    OpenAIRE

    Voirin, Thomas; Delaune, Jeff; Le Besnerais, Guy; Farges, Jean Loup; Bourdarias, Clément; Krüger, Hans

    2013-01-01

    After ExoMars in 2016 and 2018, future ESA missions to Mars, the Moon, or asteroids will require safe and pinpoint precision landing capabilities, with, for example, a specified accuracy of typically 100 m at touchdown for a Moon landing. The safe landing requirement can be met thanks to state-of-the-art Terrain-Relative Navigation (TRN) sensors such as Wide-Field-of-View vision-based navigation cameras (VBNC), with appropriate hazard detection and avoidance algorithms. To reach the pinpoint pr...

  14. Gain-scheduling control of a monocular vision-based human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-08-01

    Full Text Available , R. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edition. Hutchinson, S., Hager, G., and Corke, P. (1996). A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12... environment, in a passive manner, at relatively high speeds and low cost. The control of mobile robots using vision in the feed- back loop falls into the well-studied field of visual servo control. Two primary approaches are used: image-based visual...

  15. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke

    2013-12-01

    To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Interocular acuity differences and binocular summation ratios were compared between groups. Crowding ratios were calculated by dividing the single Landolt C decimal acuity with the crowded Landolt C decimal acuity mono- and binocularly. A linear regression analysis was conducted to investigate the contribution of 5 predictors to the monocular and binocular crowding ratio: nystagmus amplitude, nystagmus frequency, strabismus, astigmatism, and anisometropia. Crowding ratios were higher under mono- and binocular viewing conditions for children with infantile nystagmus syndrome than for children with normal vision. Children with albinism showed higher crowding ratios in their poorer eye and under binocular viewing conditions than children with normal vision. Children with albinism and children with infantile nystagmus syndrome showed larger interocular acuity differences than children with normal vision (0.1 logMAR in our clinical groups and 0.0 logMAR in children with normal vision). Binocular summation ratios did not differ between groups. Strabismus and nystagmus amplitude predicted the crowding ratio in the poorer eye (p = 0.015 and p = 0.005, respectively). The crowding ratio in the better eye showed a marginally significant relation with nystagmus frequency and depth of anisometropia (p = 0.082 and p = 0.070, respectively). The binocular crowding ratio was not predicted by any of the variables. Children with albinism and children with infantile nystagmus syndrome show larger interocular acuity differences than children with normal vision. Strabismus and nystagmus amplitude are significant predictors of the crowding ratio in the poorer eye.

  16. Grey and white matter changes in children with monocular amblyopia: voxel-based morphometry and diffusion tensor imaging study.

    Science.gov (United States)

    Li, Qian; Jiang, Qinying; Guo, Mingxia; Li, Qingji; Cai, Chunquan; Yin, Xiaohui

    2013-04-01

    To investigate the potential morphological alterations of grey and white matter in monocular amblyopic children using voxel-based morphometry (VBM) and diffusion tensor imaging (DTI). A total of 20 monocular amblyopic children and 20 age-matched controls were recruited. Whole-brain MRI scans were performed after a series of ophthalmologic exams. The imaging data were processed and two-sample t-tests were employed to identify group differences in grey matter volume (GMV), white matter volume (WMV) and fractional anisotropy (FA). After image screening, there were 12 amblyopic participants and 15 normal controls qualified for the VBM analyses. For DTI analysis, 14 amblyopes and 14 controls were included. Compared to the normal controls, reduced GMVs were observed in the left inferior occipital gyrus, the bilateral parahippocampal gyrus and the left supramarginal/postcentral gyrus in the monocular amblyopic group, with the lingual gyrus presenting augmented GMV. Meanwhile, WMVs reduced in the left calcarine, the bilateral inferior frontal and the right precuneus areas, and growth in the WMVs was seen in the right cuneus, right middle occipital and left orbital frontal areas. Diminished FA values in optic radiation and increased FA in the left middle occipital area and right precuneus were detected in amblyopic patients. In monocular amblyopia, cortices related to spatial vision underwent volume loss, which provided neuroanatomical evidence of stereoscopic defects. Additionally, white matter development was also hindered due to visual defects in amblyopes. Growth in the GMVs, WMVs and FA in the occipital lobe and precuneus may reflect a compensation effect by the unaffected eye in monocular amblyopia.

  17. Vision-based Navigation and Reinforcement Learning Path Finding for Social Robots

    OpenAIRE

    Pérez Sala, Xavier

    2010-01-01

    We propose a robust system for automatic robot navigation in uncontrolled environments. The system is composed of three main modules: the Artificial Vision module, the Reinforcement Learning module, and the behavior control module. The aim of the system is to allow a robot to automatically find a path that arrives at a prefixed goal. Turn and straight movements in uncontrolled environments are automatically estimated and controlled using the proposed modules. The Artificial Vi...

  18. Vision-Based Leader Vehicle Trajectory Tracking for Multiple Agricultural Vehicles.

    Science.gov (United States)

    Zhang, Linhuan; Ahamed, Tofael; Zhang, Yan; Gao, Pengbo; Takigawa, Tomohiro

    2016-04-22

    The aim of this study was to design a navigation system composed of a human-controlled leader vehicle and a follower vehicle. The follower vehicle automatically tracks the leader vehicle; with such a system, a human driver can control two vehicles efficiently in agricultural operations. The tracking system was developed for the leader and the follower vehicle, and control of the follower was performed using a camera vision system. A stable and accurate monocular vision-based sensing system was designed, consisting of a camera and rectangular markers. Noise in the data acquisition was reduced by using the least-squares method. A feedback control algorithm was used to allow the follower vehicle to track the trajectory of the leader vehicle, and a proportional-integral-derivative (PID) controller was introduced to maintain the required distance between the leader and the follower. Field experiments were conducted to evaluate the sensing and tracking performance of the leader-follower system while the leader vehicle was driven at an average speed of 0.3 m/s. For linear trajectory tracking, the root mean square (RMS) errors were 6.5 cm, 8.9 cm and 16.4 cm for straight, turning and zigzag paths, respectively; for parallel trajectory tracking, the RMS errors were 7.1 cm, 14.6 cm and 14.0 cm. The navigation performance indicated that the autonomous follower vehicle was able to follow the leader vehicle with satisfactory tracking accuracy. The developed leader-follower system can therefore be implemented for the harvesting of grains, using a combine as the leader and an unloader as the autonomous follower vehicle.
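
    The distance-keeping PID mentioned above can be sketched as follows; the gains and the target gap are illustrative, not the values used in the field tests.

        class DistancePID:
            """PID on the vision-measured leader-follower distance; the
            output is an increment to the follower's speed command."""
            def __init__(self, kp=0.8, ki=0.05, kd=0.2, target_m=3.0):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.target_m = target_m
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, measured_m, dt):
                err = measured_m - self.target_m   # > 0: gap too large
                self.integral += err * dt
                deriv = (err - self.prev_err) / dt
                self.prev_err = err
                return (self.kp * err + self.ki * self.integral
                        + self.kd * deriv)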

  19. Parallel Tracking and Mapping for Controlling VTOL Airframe

    Directory of Open Access Journals (Sweden)

    Michal Jama

    2011-01-01

    Full Text Available This work presents a vision-based system for navigation on a vertical takeoff and landing unmanned aerial vehicle (UAV). This is a monocular vision-based, simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This is different from past SLAM solutions on UAVs, which use sensors that measure depth, such as LIDAR, stereoscopic cameras or depth cameras. The solution presented in this paper extends and significantly modifies a recent open-source algorithm that solves the SLAM problem using an approach fundamentally different from the traditional one. The proposed modifications provide the position measurements necessary for the navigation solution on a UAV. The main contributions of this work include: (1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; (2) improved performance of the SLAM algorithm for lower camera frame rates; and (3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible.

  20. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to directions in a supervised mode. The images in the data sets are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line-tracking experiment tracks a desired path composed of straight and curved lines; the goal of the obstacle-avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line-tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle-avoidance experiment. During the actual tests, the robot follows the runway centerline outdoors and avoids obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
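
    A much smaller stand-in for the end-to-end network described above can illustrate the structure: convolutional features mapped directly to a left/straight/right direction class. The PyTorch sketch below is far shallower than the 15-layer network in the record, and all layer sizes are illustrative.

        import torch
        import torch.nn as nn

        class SteeringNet(nn.Module):
            """Toy end-to-end CNN: RGB frame -> 3 direction logits."""
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1))
                self.head = nn.Linear(64, n_classes)

            def forward(self, x):
                return self.head(self.features(x).flatten(1))

        if __name__ == "__main__":
            logits = SteeringNet()(torch.randn(1, 3, 120, 160))
            print(logits.shape)   # torch.Size([1, 3])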

  1. The monocular visual imaging technology model applied in the airport surface surveillance

    Science.gov (United States)

    Qin, Zhe; Wang, Jian; Huang, Chao

    2013-08-01

    At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance that perceives moving objects in the scene, such as aircraft, vehicles and personnel, and estimates their locations. This new model provides an important complement to airport surface surveillance that differs from traditional surface surveillance radar techniques. The technique not only provides the ATC with a clear picture of object activity, but also provides image recognition and positioning of moving targets in the area. It can thereby improve the efficiency of airport operations and help avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyses the monocular vision measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost and highly efficient. It is an advanced monitoring technique that can cover the blind spots of surface surveillance radar monitoring and positioning systems.
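
    One common way to realize the positioning step in such a model (the record does not specify the authors' exact method) is a ground-plane homography: four surveyed apron points with known image and world coordinates calibrate the mapping, after which any pixel on the ground plane converts to metric coordinates. The coordinates below are invented for illustration.

        import cv2
        import numpy as np

        # Four surveyed correspondences (pixels -> metres); values invented.
        img_pts = np.float32([[100, 700], [1800, 720], [1500, 300], [300, 310]])
        gnd_pts = np.float32([[0, 0], [60, 0], [55, 120], [5, 118]])
        H = cv2.getPerspectiveTransform(img_pts, gnd_pts)

        def pixel_to_ground(u, v):
            """Map an image pixel on the apron surface to ground metres."""
            p = H @ np.array([u, v, 1.0])
            return p[0] / p[2], p[1] / p[2]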

  2. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity and compare early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss as documented and studied in several animal species and human patients will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  3. Deviation from Trajectory Detection in Vision based Robotic Navigation using SURF and Subsequent Restoration by Dynamic Auto Correction Algorithm

    Directory of Open Access Journals (Sweden)

    Ray Debraj

    2015-01-01

    Full Text Available Speeded Up Robust Features (SURF) are used to position a robot with respect to its environment and to aid vision-based robotic navigation. During navigation, irregularities in the terrain, especially in an outdoor environment, may cause the robot to deviate from its track. Another cause of deviation can be unequal speeds of the left and right robot wheels. Hence it is essential to detect such deviations and perform corrective operations to bring the robot back on track. In this paper we propose a novel algorithm that uses image matching with SURF to detect deviation of the robot from its trajectory and performs subsequent restoration through corrective operations. This algorithm is executed in parallel with the positioning and navigation algorithms by distributing tasks among different CPU cores using the Open Multi-Processing (OpenMP) API.
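
    A rough sketch of the deviation check: match features between the stored reference view for the current waypoint and the live frame, then read the lateral deviation off the median horizontal keypoint shift. The paper uses SURF; ORB is substituted below because SURF sits in OpenCV's non-free module, and the match-count threshold is illustrative.

        import cv2
        import numpy as np

        def lateral_shift_px(reference_gray, live_gray):
            """Median horizontal keypoint shift (pixels) between two 8-bit
            grayscale views; returns None when evidence is too thin."""
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(reference_gray, None)
            k2, d2 = orb.detectAndCompute(live_gray, None)
            if d1 is None or d2 is None:
                return None
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)
            if len(matches) < 10:
                return None
            dx = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
            return float(np.median(dx))   # sign tells the deviation side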

  4. A Visual-Aided Inertial Navigation and Mapping System

    Directory of Open Access Journals (Sweden)

    Rodrigo Munguía

    2016-05-01

    Full Text Available State estimation is a fundamental necessity for any application involving autonomous robots. This paper describes a visual-aided inertial navigation and mapping system for autonomous robots. The system, which relies on Kalman filtering, is designed to fuse measurements obtained from a monocular camera, an inertial measurement unit (IMU) and a position sensor (GPS). The estimated state consists of the full state of the vehicle: the position, the orientation, their first derivatives and the parameter errors of the inertial sensors (i.e., the biases of the gyroscopes and accelerometers). The system also provides the spatial locations of the visual features observed by the camera. The proposed scheme was designed considering the limited resources commonly available in small mobile robots, and it is intended for cluttered environments in order to perform fully vision-based navigation during periods when the position sensor is not available. Moreover, the estimated map of visual features would be suitable for multiple tasks: (i) terrain analysis; (ii) three-dimensional (3D) scene reconstruction; (iii) localization, detection or perception of obstacles and generation of trajectories to navigate around these obstacles; and (iv) autonomous exploration. In this work, simulations and experiments with real data are presented in order to validate and demonstrate the performance of the proposal.
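
    At its core, the fusion described above is a Kalman predict/update cycle. The deliberately reduced single-axis sketch below conveys only that cycle; the real filter carries full pose, sensor biases and feature locations, and the noise values here are illustrative.

        import numpy as np

        def kf_predict(x, P, accel, dt, q=0.1):
            """Propagate [position, velocity] with an IMU acceleration."""
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([0.5 * dt**2, dt])
            return F @ x + B * accel, F @ P @ F.T + q * np.eye(2)

        def kf_update_position(x, P, z, r=1.0):
            """Correct with a position fix (GPS or a triangulated feature)."""
            H = np.array([[1.0, 0.0]])
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            return x, (np.eye(2) - K @ H) @ P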

  5. Vision-based mapping with cooperative robots

    Science.gov (United States)

    Little, James J.; Jennings, Cullen; Murray, Don

    1998-10-01

    Two stereo-vision-based mobile robots navigate and autonomously explore their environment safely while building occupancy grid maps of the environment. The robots maintain position estimates within a global coordinate frame using landmark recognition. This allows them to build a common map by sharing position information and stereo data. Stereo vision processing and map updates are done at 3 Hz and the robots move at speeds of 200 cm/s. Cooperative mapping is achieved through autonomous exploration of unstructured and dynamic environments. The map is constructed conservatively, so as to be useful for collision-free path planning. Each robot maintains a separate copy of a shared map, and then posts updates to the common map when it returns to observe a landmark at home base. Issues include synchronization, mutual localization, navigation, exploration, registration of maps, merging repeated views (fusion), centralized vs decentralized maps.

  6. Anisometropia and ptosis in patients with monocular elevation deficiency

    International Nuclear Information System (INIS)

    Zafar, S.N.; Islam, F.; Khan, A.M.

    2016-01-01

    Objective: To determine the effect of ptosis on the refractive error in eyes with monocular elevation deficiency. Place and Duration of Study: Al-Shifa Trust Eye Hospital, Rawalpindi, from January 2011 to January 2014. Methodology: Visual acuity, refraction, orthoptic assessment and ptosis evaluation of all patients having monocular elevation deficiency (MED) were recorded. The Shapiro-Wilk test was used to test for normality. Median and interquartile range (IQR) were calculated for the data. Non-parametric variables were compared using the Wilcoxon signed-ranks test. P-values of <0.05 were considered significant. Results: A total of 41 MED patients were assessed during the study period. Best corrected visual acuity (BCVA) and refractive error were compared between the eyes having MED and the unaffected eyes of the same patients. The refractive status of patients having ptosis with MED was also compared with that of patients having MED without ptosis. Astigmatic correction and vision differed significantly between the two eyes of the patients. Vision was significantly different between the two eyes of patients in both groups, whether ptosis was present or absent (p=0.04 and p < 0.001, respectively). Conclusion: A significant difference in vision and anisoastigmatism was noted between the two eyes of patients with MED in this study. The presence or absence of ptosis affected vision but did not have a significant effect on the spherical equivalent (SE) or astigmatic correction between the two eyes. (author)

  7. Estimated Prevalence of Monocular Blindness and Monocular ...

    African Journals Online (AJOL)

    with MB/MSVI; among the 109 (51%) children with MB/MSVI that had a known etiology, trauma ... Table 1: Major anatomical site of monocular blindness and monocular severe visual impairment in children (anatomical cause, total (%)): corneal scar, 89 (42); whole globe, 43 (20); lens, 42 (19); amblyopia, 16 (8); retina, 9 (4).

  8. Autonomous navigation of the vehicle with vision system. Vision system wo motsu sharyo no jiritsu soko seigyo

    Energy Technology Data Exchange (ETDEWEB)

    Yatabe, T.; Hirose, T.; Tsugawa, S. (Mechanical Engineering Laboratory, Tsukuba (Japan))

    1991-11-10

    As part of research on automatic driving systems, a pilot driverless automobile was built and evaluated; it is equipped with obstacle detection and automatic navigation functions that do not depend on ground facilities such as guiding cables. A small car was fitted with a vision system that recognizes obstacles three-dimensionally by means of two TV cameras, and a dead-reckoning system that calculates the car's position and direction from the speeds of the rear wheels in real time. The control algorithm, which recognizes obstacles and the road extent from the vision system and drives the car automatically, uses a table-lookup method that retrieves the required driving commands from a stored table based on data from the vision system. The steering uses a target-point-following algorithm, provided that the vehicle has a map. Driving tests yielded the useful findings that the system meets its basic functional requirements but needs a few improvements because the control is open-loop. 36 refs., 22 figs., 2 tabs.

  9. Vision-aided inertial navigation system for robotic mobile mapping

    Science.gov (United States)

    Bayoud, Fadi; Skaloud, Jan

    2008-04-01

    A mapping system based on vision-aided inertial navigation was developed for areas where GNSS signals are unreachable. In this framework, a methodology for the integration of vision and inertial sensors is presented, analysed and tested. The system employs the method of “SLAM: Simultaneous Localisation And Mapping”, where the only external input available to the system at the beginning of the mapping mission is a number of features with known coordinates. SLAM is a term used in the robotics community to describe the problem of mapping the environment and at the same time using this map to determine the location of the mapping device. Differing from the robotics approach, the presented development stems from the frameworks of photogrammetry and kinematic geodesy, which are merged in two filters that run in parallel: the least-squares adjustment (LSA) for determining feature coordinates and the Kalman filter (KF) for navigation correction. To test this approach, a mapping system prototype comprising two CCD cameras and one inertial measurement unit (IMU) is introduced. Conceptually, the outputs of the LSA photogrammetric resection are used as the external measurements for the KF that corrects the inertial navigation. The filtered position and orientation are subsequently employed in the photogrammetric intersection to map the surrounding features, which are used as control points for the resection in the next epoch. We confirm empirically the dependency of navigation performance on the quality of the images and the number of tracked features, as well as on the geometry of the stereo pair. Due to its autonomous nature, SLAM's performance is further affected by the quality of the IMU initialisation and the a priori assumptions on error distribution. Using the example of the presented system, we show that centimetre accuracy can be achieved in both navigation and mapping when the image geometry is optimal.

  10. Visual Suppression of Monocularly Presented Symbology Against a Fused Background in a Simulation and Training Environment

    National Research Council Canada - National Science Library

    Winterbottom, Marc D; Patterson, Robert; Pierce, Byron J; Taylor, Amanda

    2006-01-01

    .... This may create interocular differences in image characteristics that could disrupt binocular vision by provoking visual suppression, thus reducing visibility of the background scene, monocular symbology...

  11. Monocular zones in stereoscopic scenes: A useful source of information for human binocular vision?

    Science.gov (United States)

    Harris, Julie M.

    2010-02-01

    When an object is closer to an observer than the background, the small differences between the right and left eye views are interpreted by the human brain as depth. This basic ability of the human visual system, called stereopsis, lies at the core of all binocular three-dimensional (3-D) perception and related technological display development. To achieve stereopsis, it is traditionally assumed that corresponding locations in the right and left eye's views must first be matched, and the relative differences between right and left eye locations are then used to calculate depth. But this is not the whole story. At every object-background boundary, there are regions of the background that only one eye can see because, in the other eye's view, the foreground object occludes that region of background. Such monocular zones have no corresponding match in the other eye's view and can thus cause problems for depth extraction algorithms. In this paper I discuss evidence, from our knowledge of human visual perception, illustrating that monocular zones do not pose problems for the human visual system; rather, our visual systems can extract depth from such zones. I review the relevant human perception literature in this area and show some recent data aimed at quantifying the perception of depth from monocular zones. The paper finishes with a discussion of the potential importance of considering monocular zones for stereo display technology and depth compression algorithms.

  12. AN AUTONOMOUS GPS-DENIED UNMANNED VEHICLE PLATFORM BASED ON BINOCULAR VISION FOR PLANETARY EXPLORATION

    Directory of Open Access Journals (Sweden)

    M. Qin

    2018-04-01

    Full Text Available Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching)-based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.

  13. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    Science.gov (United States)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work on designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching)-based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and BA (Bundle Adjustment). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has potential for future planetary exploration.

  14. Estimating Target Orientation with a Single Camera for Use in a Human-Following Robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2010-11-01

    Full Text Available This paper presents a monocular vision-based technique for extracting orientation information from a human torso for use in a robotic human-follower. Typical approaches to human-following use an estimate of only human position for navigation...

  15. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    Science.gov (United States)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled due to wheel slip are presented. It is shown that the use of computer vision allows one to correct the erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a single-board Raspberry Pi 3 computer. The results of an experiment on mobile robot navigation using this control system are presented.
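
    The correction step described above can be as simple as blending the drifting odometry pose toward the vision-derived pose whenever a landmark with known map coordinates is recognized. A sketch under that assumption; the blend weight is illustrative, and heading wrap-around is ignored for brevity.

        def correct_pose(odom_pose, landmark_pose, alpha=0.8):
            """Blend two (x, y, heading) estimates; alpha is the trust
            placed in the vision fix derived from the artificial landmark."""
            return tuple((1.0 - alpha) * o + alpha * l
                         for o, l in zip(odom_pose, landmark_pose))

        # e.g. pose = correct_pose((2.10, 0.95, 0.08), (2.31, 1.02, 0.02))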

  16. Autonomous vision-based navigation for proximity operations around binary asteroids

    Science.gov (United States)

    Gil-Fernandez, Jesus; Ortega-Hernando, Guillermo

    2018-06-01

    Future missions to small bodies demand higher level of autonomy in the Guidance, Navigation and Control system for higher scientific return and lower operational costs. Different navigation strategies have been assessed for ESA's asteroid impact mission (AIM). The main objective of AIM is the detailed characterization of binary asteroid Didymos. The trajectories for the proximity operations shall be intrinsically safe, i.e., no collision in presence of failures (e.g., spacecraft entering safe mode), perturbations (e.g., non-spherical gravity field), and errors (e.g., maneuver execution error). Hyperbolic arcs with sufficient hyperbolic excess velocity are designed to fulfil the safety, scientific, and operational requirements. The trajectory relative to the asteroid is determined using visual camera images. The ground-based trajectory prediction error at some points is comparable to the camera Field Of View (FOV). Therefore, some images do not contain the entire asteroid. Autonomous navigation can update the state of the spacecraft relative to the asteroid at higher frequency. The objective of the autonomous navigation is to improve the on-board knowledge compared to the ground prediction. The algorithms shall fit in off-the-shelf, space-qualified avionics. This note presents suitable image processing and relative-state filter algorithms for autonomous navigation in proximity operations around binary asteroids.

  17. Position estimation and driving of an autonomous vehicle by monocular vision

    Science.gov (United States)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real time for target recognition provided autonomous control of a scale-model electric truck. The two-wheel-drive truck was modified as an autonomous rover test-bed for vision-based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent, relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the combination of motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and could be characterized to provide visual cues on rover travel, such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.
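
    The situation-independent safeguard mentioned above, trusting the tracker only while its recognition confidence error stays low, can be sketched as a simple gate over a sliding window. This is a hypothetical illustration; the thresholds and window length are not from the record.

        from collections import deque

        class ConfidenceGate:
            """Accept a target fix only while both the instantaneous and
            the recent-average recognition error stay below a threshold."""
            def __init__(self, max_error=0.3, window=10):
                self.errors = deque(maxlen=window)
                self.max_error = max_error

            def accept(self, error):
                self.errors.append(error)
                mean = sum(self.errors) / len(self.errors)
                return error < self.max_error and mean < self.max_error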

  18. Image Based Solution to Occlusion Problem for Multiple Robots Navigation

    Directory of Open Access Journals (Sweden)

    Taj Mohammad Khan

    2012-04-01

    Full Text Available In machine vision, the occlusion problem is always a challenging issue in image-based mapping and navigation tasks. This paper presents a multiple-view vision-based algorithm for the development of an occlusion-free map of the indoor environment. The map is assumed to be utilized by mobile robots within the workspace. It has a wide range of applications, including mobile robot path planning and navigation, access control in restricted areas, and surveillance systems. We used a wall-mounted fixed camera system. After intensity adjustment and background subtraction of the synchronously captured images, image registration was performed. We applied our algorithm to the registered images to resolve the occlusion problem. This technique works well even in the presence of total occlusion for longer periods.

  19. ROBERT autonomous navigation robot with artificial vision

    International Nuclear Information System (INIS)

    Cipollini, A.; Meo, G.B.; Nanni, V.; Rossi, L.; Taraglio, S.; Ferjancic, C.

    1993-01-01

    This work, joint research between ENEA (the Italian National Agency for Energy, New Technologies and the Environment) and DIGITAL, presents the layout of the ROBERT project, ROBot with Environmental Recognizing Tools, under development in ENEA laboratories. The project aims at the development of an autonomous mobile vehicle able to navigate in a known indoor environment through the use of artificial vision. The general architecture of the robot is shown, together with the data and control flow among the various subsystems. The inner structure of these subsystems, complete with their functionalities, is also given in detail.

  20. Image-based particle filtering for navigation in a semi-structured agricultural environment

    NARCIS (Netherlands)

    Hiremath, S.; van Evert, F.K.; ter Braak, C.J.F.; Stein, A.; van der Heijden, G.

    2014-01-01

    Autonomous navigation of field robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. The drawback of existing systems is the lack of robustness to these uncertainties. In this study we propose a vision-based navigation method to address these

  1. Binocular vision in amblyopia: structure, suppression and plasticity.

    Science.gov (United States)

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel H

    2014-03-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cortex and, on the basis of initial data, appear to improve both binocular and monocular visual function, even in adults with amblyopia. The aim of this review is to provide an overview of recent studies that have investigated the structure, measurement and treatment of binocular vision in observers with strabismic, anisometropic and mixed amblyopia. © 2014 The Authors. Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  2. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    Science.gov (United States)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but it is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data with a large coverage area is open-source vector maps, from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map-aided navigation. As opposed to existing image-map matching algorithms, MINA utilizes publicly available open-source georeferenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data, and results are presented in the paper.

  3. Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

    Directory of Open Access Journals (Sweden)

    Zhen Jia

    2008-12-01

    Full Text Available This paper surveys the developments of the last 10 years in the area of vision-based target tracking for autonomous vehicle navigation. First, the motivations for and applications of vision-based target tracking in autonomous vehicle navigation are presented in the introduction section, from which it can be concluded that robust visual-target-tracking-based navigation algorithms are very much needed for the broad applications of autonomous vehicles. The paper then reviews recent techniques in three categories: vision-based target tracking for land, underwater and aerial vehicle navigation. Next, the increasing trend of using data fusion in visual-target-tracking-based autonomous vehicle navigation is discussed; through data fusion the tracking performance is improved and becomes more robust. Based on the review, the remaining research challenges are summarized and future research directions are investigated.

  4. Ground Stereo Vision-Based Navigation for Autonomous Take-off and Landing of UAVs: A Chan-Vese Model Approach

    Directory of Open Access Journals (Sweden)

    Dengqing Tang

    2016-04-01

    Full Text Available This article addresses flying-target detection and localization for fixed-wing unmanned aerial vehicle (UAV) autonomous take-off and landing within Global Navigation Satellite System (GNSS)-denied environments. A Chan-Vese model-based approach is proposed and developed for ground stereo vision detection. An Extended Kalman Filter (EKF) is fused into the state estimation to reduce the localization inaccuracy caused by measurement errors of object detection and pan-tilt unit (PTU) attitudes. Furthermore, a region-of-interest (ROI) scheme is set up to improve the real-time capability. The present work offers real-time, accurate and robust performance compared with our previous works. Both offline and online experimental results validate the effectiveness and better performance of the proposed method against the traditional triangulation-based localization algorithm.
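
    The ROI step mentioned above simply confines detection to a window around the filter-predicted target pixel, which is where the real-time gain comes from. A sketch under that assumption; the window size is illustrative.

        def roi_crop(frame, pred_uv, half=64):
            """Crop a window of side 2*half around the predicted target
            pixel so the detector runs on a small patch; returns the patch
            and its top-left offset for mapping detections back."""
            h, w = frame.shape[:2]
            u, v = int(pred_uv[0]), int(pred_uv[1])
            x0, y0 = max(u - half, 0), max(v - half, 0)
            x1, y1 = min(u + half, w), min(v + half, h)
            return frame[y0:y1, x0:x1], (x0, y0)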

  5. Monocular Elevation Deficiency - Double Elevator Palsy

    Science.gov (United States)

    ... What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ...

  6. Research on position and orientation measurement method for roadheader based on vision/INS

    Science.gov (United States)

    Yang, Jinyong; Zhang, Guanqin; Huang, Zhe; Ye, Yaozhong; Ma, Bowen; Wang, Yizhong

    2018-01-01

    The roadheader, a kind of special equipment for large tunnel excavation, has been widely used in coal mines. It is one of the main electromechanical machines for mine production and is regarded as the core equipment for underground tunnel-driving construction. With the deepening application of rapid tunnel-driving systems, underground tunnel-driving methods with a higher level of automation are required. In this respect, a real-time position and orientation measurement technique for the roadheader is one of the most important research topics. To solve the problem of automatic, real-time position and orientation measurement for roadheaders, this paper analyses and compares the features of several existing measuring methods and then proposes a new method based on the combination of monocular vision and a strapdown inertial navigation system (SINS). Realizing five degree-of-freedom (DOF) measurement of the real-time position and orientation of the roadheader, the method has been verified on rapid excavation equipment in the Daliuta coal mine. Experimental results show that the accuracy of orientation measurement is better than 0.1°, the standard deviation of static drift is better than 0.25° and the accuracy of position measurement is better than 1 cm. This proves that the method can be used for real-time position and orientation measurement of roadheaders and has broad prospects in coal mine engineering.

  7. Vision-based Vehicle Detection Survey

    Directory of Open Access Journals (Sweden)

    Alex David S

    2016-03-01

    Full Text Available Thousands of drivers and passengers lose their lives every year in road accidents caused by crashes involving more than one vehicle. Over the past decade, much research has been dedicated to the development of intelligent driver-assistance systems and autonomous vehicles, which reduce this danger by monitoring the on-road environment. In particular, researchers have been drawn to on-road vehicle detection in recent years. Different aspects are analyzed in this paper, including camera placement, the various applications of monocular vehicle detection, common features and common classification methods, motion-based approaches, nighttime vehicle detection and monocular pose estimation. Previous works on vehicle detection are surveyed according to camera position, feature-based detection, motion-based detection and nighttime detection.

  8. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (c) 2015 APA, all rights reserved.

  9. Vision based techniques for rotorcraft low altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Suorsa, Ray; Smith, Philip

    1991-01-01

    An overview of research in obstacle detection at NASA Ames Research Center is presented. The research applies techniques from computer vision to the automation of rotorcraft navigation. The development of a methodology for detecting the range to obstacles based on maximum utilization of passive sensors is emphasized. The development of a flight and image database for verification of vision-based algorithms, and a passive ranging methodology tailored to the needs of helicopter flight, are discussed. Preliminary results indicate that it is possible to obtain adequate range estimates except in regions close to the focus of expansion (FOE). Closer to the FOE, the error in range increases because the magnitude of the disparity becomes smaller, resulting in a low SNR.
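
    The FOE behaviour noted above falls out of the flow geometry: for pure forward translation at speed V, the flow magnitude at a pixel a distance r from the focus of expansion is r*V/Z, so range is recovered as Z = V*r/|flow|, and both r and |flow| shrink to zero at the FOE, degrading the signal-to-noise ratio. A sketch under those assumptions (pure translation, known ground speed):

        import numpy as np

        def range_from_flow(foe_uv, px_uv, flow_uv, v_mps):
            """Z = V * r / |flow| for pure forward translation, where r is
            the pixel's distance from the FOE. Returns inf when the flow
            is too small to trust (i.e., near the FOE)."""
            r = np.hypot(px_uv[0] - foe_uv[0], px_uv[1] - foe_uv[1])
            mag = np.hypot(flow_uv[0], flow_uv[1])
            return v_mps * r / mag if mag > 1e-6 else float("inf")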

  10. Binocular vision in amblyopia : structure, suppression and plasticity

    OpenAIRE

    Hess, Robert F; Thompson, Benjamin; Baker, Daniel Hart

    2014-01-01

    The amblyopic visual system was once considered to be structurally monocular. However, it is now evident that the capacity for binocular vision is present in many observers with amblyopia. This has led to new techniques for quantifying suppression that have provided insights into the relationship between suppression and the monocular and binocular visual deficits experienced by amblyopes. Furthermore, new treatments are emerging that directly target suppressive interactions within the visual cor...

  11. Restoration of binocular vision in amblyopia.

    Science.gov (United States)

    Hess, R F; Mansouri, B; Thompson, B

    2011-09-01

    To develop a treatment for amblyopia based on re-establishing binocular vision, a novel procedure is outlined for measuring and reducing the extent to which the fixing eye suppresses the fellow amblyopic eye in adults with amblyopia. We hypothesize that suppression renders a structurally binocular system functionally monocular. We demonstrate that strabismic amblyopes can combine information normally between their eyes under viewing conditions where suppression is reduced by presenting stimuli of different contrast to each eye. Furthermore, we show that prolonged periods of binocular combination lead to a strengthening of binocular vision in strabismic amblyopes and eventual combination of binocular information under natural viewing conditions (stimuli of the same contrast in each eye). A concomitant improvement in monocular acuity of the amblyopic eye occurs with this reduction in suppression and strengthening of binocular fusion. Additionally, stereoscopic function was established in the majority of patients tested. We have implemented this approach on a head-mounted device as well as on a handheld iPod. This provides the basis for a new treatment of amblyopia, one that is purely binocular and aimed at reducing suppression as a first step.

  12. Navigation studies based on the ubiquitous positioning technologies

    Science.gov (United States)

    Ye, Lei; Mi, Weijie; Wang, Defeng

    2007-11-01

    This paper summarizes present-day positioning technologies: absolute and relative positioning methods, indoor and outdoor positioning, and active and passive positioning. Global Navigation Satellite System (GNSS) technologies are introduced as the omnipresent outdoor positioning technologies, including GPS, GLONASS, Galileo and BD-1/2. After an analysis of the shortcomings of GNSS, indoor positioning technologies are discussed and compared, including A-GPS, cellular networks, infrared, electromagnetic methods, computer vision cognition, embedded pressure sensors, ultrasonic, RFID (Radio Frequency IDentification), Bluetooth, WLAN, etc. The concept and characteristics of ubiquitous positioning are then proposed. After a comparison and selection of ubiquitous positioning technologies following a systems-engineering methodology, a navigation system model based on an integrated indoor-outdoor positioning solution is proposed. This model was simulated in the Galileo Demonstration for World Expo Shanghai project. In conclusion, the prospects of navigation based on ubiquitous positioning are shown, especially for satisfying the public's need to acquire location information.

  13. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2012-03-01

    Full Text Available Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF. Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.
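
    The record does not spell out the filter design, so the following is only a generic sketch of a loosely coupled predict/update cycle in the same spirit: a constant-velocity state propagated over time and corrected by GNSS position fixes. All models and noise values are illustrative assumptions.

        import numpy as np

        # State: [px, py, vx, vy]; constant-velocity motion model.
        dt = 0.1
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        Q = np.diag([0.01, 0.01, 0.1, 0.1])        # process noise (illustrative)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)  # GNSS observes position only
        R = np.diag([4.0, 4.0])                    # GNSS noise, ~2 m std dev

        x = np.zeros(4)
        P = np.eye(4) * 10.0

        def ekf_step(x, P, z):
            """One predict/update cycle. With these linear models this reduces
            to a plain Kalman filter; the same structure holds in an EKF when
            F and H are Jacobians of nonlinear models."""
            x = F @ x                              # predict
            P = F @ P @ F.T + Q
            if z is not None:                      # update (skipped in GNSS outages)
                y = z - H @ x                      # innovation
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
                x = x + K @ y
                P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = ekf_step(x, P, np.array([1.9, 0.2]))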

  14. Monocular camera/IMU/GNSS integration for ground vehicle navigation in challenging GNSS environments.

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations.

  15. Monocular Camera/IMU/GNSS Integration for Ground Vehicle Navigation in Challenging GNSS Environments

    Science.gov (United States)

    Chu, Tianxing; Guo, Ningyan; Backén, Staffan; Akos, Dennis

    2012-01-01

    Low-cost MEMS-based IMUs, video cameras and portable GNSS devices are commercially available for automotive applications and some manufacturers have already integrated such facilities into their vehicle systems. GNSS provides positioning, navigation and timing solutions to users worldwide. However, signal attenuation, reflections or blockages may give rise to positioning difficulties. As opposed to GNSS, a generic IMU, which is independent of electromagnetic wave reception, can calculate a high-bandwidth navigation solution, however the output from a self-contained IMU accumulates errors over time. In addition, video cameras also possess great potential as alternate sensors in the navigation community, particularly in challenging GNSS environments and are becoming more common as options in vehicles. Aiming at taking advantage of these existing onboard technologies for ground vehicle navigation in challenging environments, this paper develops an integrated camera/IMU/GNSS system based on the extended Kalman filter (EKF). Our proposed integration architecture is examined using a live dataset collected in an operational traffic environment. The experimental results demonstrate that the proposed integrated system provides accurate estimations and potentially outperforms the tightly coupled GNSS/IMU integration in challenging environments with sparse GNSS observations. PMID:22736999

  16. Multiple episodes of convergence in genes of the dim light vision pathway in bats.

    Directory of Open Access Journals (Sweden)

    Yong-Yi Shen

    Full Text Available The molecular basis of the evolution of phenotypic characters is very complex and is poorly understood, with few examples documenting the roles of multiple genes. Considering that a single gene cannot fully explain the convergence of phenotypic characters, we chose to study the convergent evolution of rod vision in two divergent bats from a network perspective. The Old World fruit bats (Pteropodidae) are non-echolocating and have binocular vision, whereas the sheath-tailed bats (Emballonuridae) are echolocating and have monocular vision; however, both have relatively large eyes and rely more on rod vision to find food and navigate at night. We found that the genes CRX, which plays an essential role in the differentiation of photoreceptor cells, SAG, which is involved in the desensitization of the photoactivated transduction cascade, and the photoreceptor gene RH, which is directly responsible for the perception of dim light, have undergone parallel sequence evolution in two divergent lineages of bats with larger eyes (Pteropodidae and Emballonuroidea). The multiple convergent events in this network of genes essential for rod vision are a rare phenomenon that illustrates the importance of investigating pathways and networks in the evolution of the molecular basis of phenotypic convergence.

  17. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which offers strong independence and high precision and is not susceptible to electrical interference, has attracted more and more attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots; in recent years, visual navigation systems have been widely applied to unmanned aircraft, deep-space probes and underwater robots, further stimulating research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and, after two lunar exploration missions, the start of the third phase of the lunar exploration project, significant progress has been made in the study of visual navigation. This paper surveys the development of computer-vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters: attitude, position and velocity information can be obtained from the relationship between sensor images and the carrier's attitude, between instantly matched images and reference images, and between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance: among the many ways to achieve obstacle avoidance in UAV navigation, methods based on computer vision, including feature matching, template matching and image frames, are mainly introduced. (3) Target tracking and positioning: using the captured images, UAV position is calculated by the optical flow method, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper also expounds three kinds of mainstream visual systems. (1) High-speed visual systems: these use a parallel structure, with which image detection and processing are
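
    Among the techniques the survey names, optical flow over sequential images is the classic route to velocity cues. A minimal sketch with OpenCV's pyramidal Lucas-Kanade tracker follows; the frame filenames are placeholders.

        import numpy as np
        import cv2

        prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
        curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

        # Detect corners to track in the previous frame.
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                     minDistance=8)

        # Track them into the current frame with pyramidal Lucas-Kanade.
        p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                                   winSize=(21, 21), maxLevel=3)

        good0 = p0[status.ravel() == 1].reshape(-1, 2)
        good1 = p1[status.ravel() == 1].reshape(-1, 2)

        # Median image-plane displacement (pixels/frame); with known camera
        # height, intrinsics and frame rate this can be scaled to metric velocity.
        flow = np.median(good1 - good0, axis=0)
        print("median flow [px]:", flow)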

  18. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    Science.gov (United States)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    To address real-time precise localization in urban streets, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects such as pedestrians, the motion of the camera is estimated by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter together with the state transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.

  19. Aerial vehicles collision avoidance using monocular vision

    Science.gov (United States)

    Balashov, Oleg; Muraviev, Vadim; Strotov, Valery

    2016-10-01

    In this paper an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle, the system of equations relating object coordinates in space to the observed image is solved; its solution gives the current position and speed of the detected object in space, from which the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
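
    A common monocular shortcut for the last step, consistent in spirit with the record though not necessarily the authors' exact formulation, derives time to collision from the apparent growth of the detected object between frames (TTC ≈ s·Δt/Δs):

        def time_to_collision(size_prev_px, size_curr_px, dt):
            """Estimate time to collision from apparent object scale change.

            size_*_px: object extent in pixels in two consecutive frames
            dt: time between the frames (s)
            Assumes a roughly constant closing speed along the optical axis.
            """
            ds = size_curr_px - size_prev_px
            if ds <= 0:
                return float("inf")  # not approaching
            return size_curr_px * dt / ds

        # A bounding box growing from 40 px to 44 px over 0.04 s (25 fps):
        print(time_to_collision(40.0, 44.0, 0.04))  # ~0.44 s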

  20. Cooperative Monocular-Based SLAM for Multi-UAV Systems in GPS-Denied Environments.

    Science.gov (United States)

    Trujillo, Juan-Carlos; Munguia, Rodrigo; Guerra, Edmundo; Grau, Antoni

    2018-04-26

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This fact is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs, also obtained from visual information, are included in the system. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it further. The numerical simulation results show that the proposed system is able to provide good position and orientation estimation of the aerial vehicles flying in formation.
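
    The decisive extra measurement is the vision-derived relative distance between vehicles. Below is a sketch of that measurement model and the Jacobian it contributes to an EKF or observability analysis; the state layout (stacked UAV positions) is an assumption.

        import numpy as np

        def range_measurement(p_i, p_j):
            """Relative distance between two UAV positions: the additional
            measurement that improves observability in the cooperative setup."""
            return np.linalg.norm(p_i - p_j)

        def range_jacobian(p_i, p_j):
            """Jacobian of the range w.r.t. the stacked positions [p_i, p_j];
            rows like this are what a nonlinear (Lie-derivative) observability
            analysis stacks into its observability matrix."""
            d = range_measurement(p_i, p_j)
            u = (p_i - p_j) / d          # unit vector between the vehicles
            return np.hstack([u, -u])    # d(range)/d(p_i), d(range)/d(p_j)

        p1 = np.array([0.0, 0.0, 10.0])
        p2 = np.array([3.0, 4.0, 10.0])
        print(range_measurement(p1, p2))  # 5.0
        print(range_jacobian(p1, p2))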

  1. Model-based visual navigation of a mobile robot

    International Nuclear Information System (INIS)

    Roening, J.

    1992-08-01

    The thesis considers the problems of visual guidance of a mobile robot. A visual navigation system is formalized, consisting of four basic components: world modelling, navigation sensing, navigation and action. According to this formalization an experimental system was designed and realized, enabling real-world navigation experiments. A priori knowledge of the world is used for global path finding, aiding scene analysis and providing feedback information to close the control loop between planned and actual movements. Two world models were developed. The first was a map-based model especially designed for low-level description of indoor environments; the other was a higher-level, more symbolic representation of the surroundings utilizing the spatial graph concept. Two passive vision approaches were developed to extract navigation information. With passive three-camera stereo vision a sparse depth map of the scene was produced; another approach employed a fish-eye lens to map the entire scene of the surroundings without camera scanning. The local path planning of the system is supported by a three-dimensional scene interpreter providing a partial understanding of scene contents. The interpreter consists of data-driven low-level stages and a model-driven high-level stage. Experiments were carried out in a simulator and on a test vehicle constructed in the laboratory. The test vehicle successfully navigated indoors.

  2. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    OpenAIRE

    Edmundo Guerra; Rodrigo Munguia; Yolanda Bolea; Antoni Grau

    2013-01-01

    Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hyp...

  3. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study

    International Nuclear Information System (INIS)

    Suenaga, Hideyuki; Tran, Huy Hoang; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Takato, Tsuyoshi

    2015-01-01

    This study evaluated the use of an augmented reality (AR) navigation system that provides a markerless registration system using stereo vision in oral and maxillofacial surgery. A feasibility study was performed on a subject, wherein a stereo camera was used for tracking and markerless registration. The computed tomography data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D-IV image was displayed in real space using a 3D-AR display. Extraction of characteristic points and teeth matching were done using parallax images from two stereo cameras for patient-image registration. Accurate registration of the volunteer's anatomy with IV stereoscopic images via image matching was achieved using the fully automated markerless system, which recognized the incisal edges of the teeth and captured information pertaining to their position with an average target registration error of < 1 mm. These 3D-CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space without using special glasses. Teeth were successfully used for registration via 3D image (contour) matching. This system, without using references or fiducial markers, displayed 3D-CT images in real space with high accuracy. The system provided real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications. The online version of this article (doi:10.1186/s12880-015-0089-5) contains supplementary material, which is available to authorized users

  4. Monocular tool control, eye dominance, and laterality in New Caledonian crows.

    Science.gov (United States)

    Martinho, Antone; Burns, Zackory T; von Bayern, Auguste M P; Kacelnik, Alex

    2014-12-15

    Tool use, though rare, is taxonomically widespread, but morphological adaptations for tool use are virtually unknown. We focus on the New Caledonian crow (NCC, Corvus moneduloides), which displays some of the most innovative tool-related behavior among nonhumans. One of their major food sources is larvae extracted from burrows with sticks held diagonally in the bill, oriented with individual, but not species-wide, laterality. Among possible behavioral and anatomical adaptations for tool use, NCCs possess unusually wide binocular visual fields (up to 60°), suggesting that extreme binocular vision may facilitate tool use. Here, we establish that during natural extractions, tool tips can only be viewed by the contralateral eye. Thus, maintaining binocular view of tool tips is unlikely to have selected for wide binocular fields; the selective factor is more likely to have been to allow each eye to see far enough across the midsagittal line to view the tool's tip monocularly. Consequently, we tested the hypothesis that tool side preference follows eye preference and found that eye dominance does predict tool laterality across individuals. This contrasts with humans' species-wide motor laterality and uncorrelated motor-visual laterality, possibly because bill-held tools are viewed monocularly and move in concert with eyes, whereas hand-held tools are visible to both eyes and allow independent combinations of eye preference and handedness. This difference may affect other models of coordination between vision and mechanical control, not necessarily involving tools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Mobile Robot Navigation

    DEFF Research Database (Denmark)

    Andersen, Jens Christian

    2007-01-01

    ... the current position to a desired destination. This thesis presents and experimentally validates solutions for road classification, obstacle avoidance and mission execution. The road classification is based on laser scanner measurements, supported at longer ranges by vision, and is sufficiently sensitive to separate the road from flat roadsides and to distinguish asphalt roads from gravelled roads. The vision-based road detection uses a combination of chromaticity and edge detection to outline the traversable part of the road based on a laser-scanner-classified sample area. The perception from these two sensors is utilised by a path planner to allow a number of drive modes, and especially the ability to follow road edges is investigated. The navigation mission is controlled by a script language; the navigation script controls route sequencing, junction detection and junction crossing...

  6. Monocular Perceptual Deprivation from Interocular Suppression Temporarily Imbalances Ocular Dominance.

    Science.gov (United States)

    Kim, Hyun-Woong; Kim, Chai-Youn; Blake, Randolph

    2017-03-20

    Early visual experience sculpts neural mechanisms that regulate the balance of influence exerted by the two eyes on cortical mechanisms underlying binocular vision [1, 2], and experience's impact on this neural balancing act continues into adulthood [3-5]. One recently described, compelling example of adult neural plasticity is the effect of patching one eye for a relatively short period of time: contrary to intuition, monocular visual deprivation actually improves the deprived eye's competitive advantage during a subsequent period of binocular rivalry [6-8], the robust form of visual competition prompted by dissimilar stimulation of the two eyes [9, 10]. Neural concomitants of this improvement in monocular dominance are reflected in measurements of brain responsiveness following eye patching [11, 12]. Here we report that patching an eye is unnecessary for producing this paradoxical deprivation effect: interocular suppression of an ordinarily visible stimulus being viewed by one eye is sufficient to produce shifts in subsequent predominance of that eye to an extent comparable to that produced by patching the eye. Moreover, this imbalance in eye dominance can also be induced by prior, extended viewing of two monocular images differing only in contrast. Regardless of how shifts in eye dominance are induced, the effect decays once the two eyes view stimuli equal in strength. These novel findings implicate the operation of interocular neural gain control that dynamically adjusts the relative balance of activity between the two eyes [13, 14]. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Functional vision loss: a diagnosis of exclusion.

    Science.gov (United States)

    Villegas, Rex B; Ilsen, Pauline F

    2007-10-01

    Most cases of visual acuity or visual field loss can be attributed to ocular pathology or ocular manifestations of systemic pathology. They can also occasionally be attributed to nonpathologic processes or malingering. Functional vision loss is any decrease in vision the origin of which cannot be attributed to a pathologic or structural abnormality. Two cases of functional vision loss are described. In the first, a 58-year-old man presented for a baseline eye examination for enrollment in a vision rehabilitation program. He reported bilateral blindness since a motor vehicle accident with head trauma 4 years prior. Entering visual acuity was "no light perception" in each eye. Ocular health examination was normal and the patient made frequent eye contact with the examiners. He was referred for neuroimaging and electrophysiologic testing. The second case was a 49-year-old man who presented with a long history of intermittent monocular diplopia. His medical history was significant for psycho-medical evaluations and a diagnosis of factitious disorder. Entering uncorrected visual acuities were 20/20 in each eye, but visual field testing found constriction. No abnormalities were found that could account for the monocular diplopia or visual field deficit. A diagnosis of functional vision loss secondary to factitious disorder was made. Functional vision loss is a diagnosis of exclusion. In the event of reduced vision in the context of a normal ocular health examination, all other pathology must be ruled out before making the diagnosis of functional vision loss. Evaluation must include auxiliary ophthalmologic testing, neuroimaging of the visual pathway, review of the medical history and lifestyle, and psychiatric evaluation. Comanagement with a psychiatrist is essential for patients with functional vision loss.

  8. Monocular SLAM for autonomous robots with enhanced features initialization.

    Science.gov (United States)

    Guerra, Edmundo; Munguia, Rodrigo; Grau, Antoni

    2014-04-02

    This work presents a variant approach to the monocular SLAM problem focused on exploiting the advantages of a human-robot interaction (HRI) framework. Based upon the delayed inverse-depth feature initialization SLAM (DI-D SLAM), a known monocular technique, several crucial modifications are introduced that take advantage of data from a secondary monocular sensor, assuming that this second camera is worn by a human. The human explores an unknown environment with the robot, and when their fields of view coincide, the cameras are treated as a pseudo-calibrated stereo rig to produce depth estimates through parallax. These depth estimates are used to solve a known limitation of DI-D monocular SLAM, namely, the requirement of a metric scale initialization through known artificial landmarks; the same process is used to improve the performance of the technique when introducing new landmarks into the map. The convenience of the approach taken to the stereo estimation, based on SURF feature matching, is discussed. Experimental validation is provided with real data, with results showing improvements in terms of more features correctly initialized with reduced uncertainty, thus reducing scale and orientation drift. Additional discussion is provided on how a real-time implementation could take advantage of this approach.
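
    The pseudo-stereo step can be sketched as feature matching followed by two-view triangulation. The paper matches SURF features; the sketch below substitutes ORB, since SURF is non-free in stock OpenCV builds, and assumes 3x4 projection matrices are available from the estimated camera poses.

        import numpy as np
        import cv2

        def pseudo_stereo_depth(img_robot, img_human, P_robot, P_human):
            """Match features between the robot's and the human's cameras and
            triangulate to seed feature depths (ORB standing in for SURF)."""
            orb = cv2.ORB_create(nfeatures=1000)
            k1, d1 = orb.detectAndCompute(img_robot, None)
            k2, d2 = orb.detectAndCompute(img_human, None)

            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = matcher.match(d1, d2)

            pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T  # 2xN
            pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T

            pts4d = cv2.triangulatePoints(P_robot, P_human, pts1, pts2)
            return (pts4d[:3] / pts4d[3]).T  # Nx3 Euclidean points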

  9. SLAM-based dense surface reconstruction in monocular Minimally Invasive Surgery and its application to Augmented Reality.

    Science.gov (United States)

    Chen, Long; Tang, Wen; John, Nigel W; Wan, Tao Ruan; Zhang, Jian Jun

    2018-05-01

    ... based on a robust 3D calibration. We demonstrate the clinical relevance of the proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluation of the proposed framework consists of two steps. First, a computer-generated endoscopy simulation video was created to quantify the accuracy of the camera tracking by comparing the tracking results with the recorded ground-truth camera trajectories; the accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) of the surface vertices of the reconstructed mesh against the ground-truth 3D models. An error of 1.24 mm for the camera trajectories was obtained, and the RMSD for surface reconstruction is 2.54 mm, which compares favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement and the creation of depth cues. The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscope camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction from a sparse point cloud are effective and operate in real time, demonstrating the potential of the algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes. Copyright © 2018 Elsevier B.V. All rights reserved.
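
    The surface-accuracy metric used in the evaluation is straightforward to state. A minimal sketch, assuming vertex correspondences between the reconstructed and ground-truth meshes are already established:

        import numpy as np

        def rmsd(reconstructed, ground_truth):
            """Root Mean Square Distance between corresponding surface
            vertices (Nx3 arrays); the record reports 2.54 mm for the
            reconstructed mesh against the ground-truth model."""
            diffs = reconstructed - ground_truth
            return np.sqrt(np.mean(np.sum(diffs ** 2, axis=1)))

        rec = np.array([[0.0, 0.0, 1.2], [1.0, 0.1, 0.9]])
        gt = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
        print(rmsd(rec, gt))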

  10. Recovering stereo vision by squashing virtual bugs in a virtual reality environment.

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C; Huang, Samuel J; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne; Levi, Dennis M

    2016-06-19

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity-the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task-a 'bug squashing' game-in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information.This article is part of the themed issue 'Vision in our three-dimensional world'. © 2016 The Author(s).

  11. Recovering stereo vision by squashing virtual bugs in a virtual reality environment

    Science.gov (United States)

    Vedamurthy, Indu; Knill, David C.; Huang, Samuel J.; Yung, Amanda; Ding, Jian; Kwon, Oh-Sang; Bavelier, Daphne

    2016-01-01

    Stereopsis is the rich impression of three-dimensionality, based on binocular disparity—the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task—a ‘bug squashing’ game—in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue ‘Vision in our three-dimensional world’. PMID:27269607

  12. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected to computer networks for real-time remote control and have developed a low-priced real-time 3D display for building such systems, including a 3D HMD system using monocular multi-view displays. The 3D display technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as visual display units) to present a picture to the left eye and the right eye. The left and right images form a stereoscopic pair for the left and right eyes, so stereoscopic 3D images are observed.

  13. VEP-based acuity assessment in low vision.

    Science.gov (United States)

    Hoffmann, Michael B; Brands, Jan; Behrens-Baumann, Wolfgang; Bach, Michael

    2017-12-01

    Objective assessment of visual acuity (VA) is possible with VEP methodology, but established with sufficient precision only for vision better than about 1.0 logMAR. We here explore whether this can be extended down to 2.0 logMAR, highly desirable for low-vision evaluations. Based on the stepwise sweep algorithm (Bach et al. in Br J Ophthalmol 92:396-403, 2008) VEPs to monocular steady-state brief onset pattern stimulation (7.5-Hz checkerboards, 40% contrast, 40 ms on, 93 ms off) were recorded for eight different check sizes, from 0.5° to 9.0°, for two runs with three occipital electrodes in a Laplace-approximating montage. We examined 22 visually normal participants where acuity was reduced to ≈ 2.0 logMAR with frosted transparencies. With the established heuristic algorithm the "VEP acuity" was extracted and compared to psychophysical VA, both obtained at 57 cm distance. In 20 of the 22 participants with artificially reduced acuity the automatic analysis indicated a valid result (1.80 logMAR on average) in at least one of the two runs. 95% test-retest limits of agreement on average were ± 0.09 logMAR for psychophysical, and ± 0.21 logMAR for VEP-derived acuity. For 15 participants we obtained results in both runs and averaged them. In 12 of these 15 the low-acuity results stayed within the 95% confidence interval (± 0.3 logMAR) as established by Bach et al. (2008). The fully automated analysis yielded good agreement of psychophysical and electrophysiological VAs in 12 of 15 cases (80%) in the low-vision range down to 2.0 logMAR. This encourages us to further pursue this methodology and assess its value in patients.

  14. VISION-AIDED CONTEXT-AWARE FRAMEWORK FOR PERSONAL NAVIGATION SERVICES

    Directory of Open Access Journals (Sweden)

    S. Saeedi

    2012-07-01

    Full Text Available The ubiquity of mobile devices (such as smartphones and tablet PCs) has encouraged the use of location-based services (LBS) that are relevant to the current location and context of a mobile user. The main challenge of LBS is to provide a pervasive and accurate personal navigation system (PNS) across the different situations of a mobile user. In this paper, we propose a method of personal navigation for pedestrians that allows a user to move freely in outdoor environments. The system aims at detecting context information that is useful for improving personal navigation. The context information for a PNS consists of user activity modes (e.g., walking, stationary, driving) and the orientation and placement of the mobile device with respect to the user. After detecting the context information, a low-cost integrated positioning algorithm is employed to estimate the pedestrian navigation parameters. The method is based on the integration of the user's relative motion (changes of velocity and heading angle), estimated by video image matching, with the absolute position information provided by GPS. A Kalman filter (KF) is used to improve the navigation solution when the user is walking and the phone is in his/her hand. The experimental results demonstrate the capabilities of this method for outdoor personal navigation systems.
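
    The propagation half of such an integration can be sketched as planar dead reckoning from the image-derived speed and heading change; the fixed-gain GPS blend below merely stands in for the paper's Kalman update.

        import numpy as np

        def propagate(pose, v, dheading, dt):
            """Dead-reckon a 2-D pedestrian pose [x, y, heading] from the
            image-matching outputs: speed v (m/s) and heading change (rad)."""
            x, y, h = pose
            h = h + dheading
            x = x + v * dt * np.cos(h)
            y = y + v * dt * np.sin(h)
            return np.array([x, y, h])

        def gps_blend(pred_xy, gps_xy, w_gps=0.3):
            """Simplest possible correction: a fixed-gain blend in which
            w_gps plays the role of the Kalman gain."""
            return (1 - w_gps) * pred_xy + w_gps * gps_xy

        pose = np.array([0.0, 0.0, 0.0])
        pose = propagate(pose, v=1.4, dheading=0.05, dt=1.0)
        pose[:2] = gps_blend(pose[:2], gps_xy=np.array([1.5, 0.1]))
        print(pose)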

  15. Quad Rotorcraft Control Vision-Based Hovering and Navigation

    CERN Document Server

    García Carrillo, Luis Rodolfo; Lozano, Rogelio; Pégard, Claude

    2013-01-01

    Quad-Rotor Control develops original control methods for the navigation and hovering flight of an autonomous mini-quad-rotor robotic helicopter. These methods use an imaging system and a combination of inertial and altitude sensors to localize and guide the movement of the unmanned aerial vehicle relative to its immediate environment. The history, classification and applications of UAVs are introduced, followed by a description of modelling techniques for quad-rotors and the experimental platform itself. A control strategy for the improvement of attitude stabilization in quad-rotors is then proposed and tested in real-time experiments. The strategy, based on the use of low-cost components and with experimentally-established robustness, avoids drift in the UAV’s angular position by the addition of an internal control loop to each electronic speed controller ensuring that, during hovering flight, all four motors turn at almost the same speed. The quad-rotor’s Euler angles being very close to the origin, oth...

  16. Prevalence of color vision deficiency among arc welders.

    Science.gov (United States)

    Heydarian, Samira; Mahjoob, Monireh; Gholami, Ahmad; Veysi, Sajjad; Mohammadi, Morteza

    This study was performed to investigate whether occupationally related color vision deficiency can occur from welding. A total of 50 male welders who had been working as welders for at least 4 years were randomly selected as the case group, and 50 age-matched non-welder men who lived in the same area served as the control group. Color vision was assessed using the Lanthony desaturated panel D-15 test. The test was performed under a daylight fluorescent lamp with a spectral distribution of energy corresponding to a color temperature of 6500 K and a color rendering index of 94, providing 1000 lx on the work plane. The test was carried out monocularly and no time limit was imposed. All data analyses were performed using SPSS, version 22. The prevalence of dyschromatopsia among welders was 15%, which was statistically higher than that of the non-welder group (2%) (p=0.001). Among welders with dyschromatopsia, the color vision deficiency was monocular in 72.7% of cases. There was a positive relationship between length of employment and color vision loss (p=0.04). Similarly, a significant correlation was found between the prevalence of color vision deficiency and the average hours of welding per day (p=0.025). Chronic exposure to welding light may cause color vision deficiency; the damage depends on the exposure duration and the length of employment as a welder. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  17. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    Science.gov (United States)

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
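
    The "navigation via classification" idea reduces to a CNN that maps one panorama to a discrete heading class. A toy PyTorch sketch of such a classifier follows; the layer sizes and the five heading classes are assumptions, since the record does not specify the architecture.

        import torch
        import torch.nn as nn

        class HeadingCNN(nn.Module):
            """Toy end-to-end network: an input panorama is mapped to one of
            n_headings discrete direction classes."""
            def __init__(self, n_headings=5):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d((4, 4)),
                )
                self.classifier = nn.Linear(32 * 4 * 4, n_headings)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        model = HeadingCNN()
        panorama = torch.randn(1, 3, 128, 256)   # dummy equirectangular frame
        probs = model(panorama).softmax(dim=1)
        print(probs.argmax(dim=1))               # predicted heading class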

  18. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images

    Directory of Open Access Journals (Sweden)

    Lingyan Ran

    2017-06-01

    Full Text Available Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  19. Stereo-vision-based terrain mapping for off-road autonomous navigation

    Science.gov (United States)

    Rankin, Arturo L.; Huertas, Andres; Matthies, Larry H.

    2009-05-01

    Successful off-road autonomous navigation by an unmanned ground vehicle (UGV) requires reliable perception and representation of natural terrain. While perception algorithms are used to detect driving hazards, terrain mapping algorithms are used to represent the detected hazards in a world model a UGV can use to plan safe paths. There are two primary ways to detect driving hazards with perception sensors mounted to a UGV: binary obstacle detection and traversability cost analysis. Binary obstacle detectors label terrain as either traversable or non-traversable, whereas, traversability cost analysis assigns a cost to driving over a discrete patch of terrain. In uncluttered environments where the non-obstacle terrain is equally traversable, binary obstacle detection is sufficient. However, in cluttered environments, some form of traversability cost analysis is necessary. The Jet Propulsion Laboratory (JPL) has explored both approaches using stereo vision systems. A set of binary detectors has been implemented that detect positive obstacles, negative obstacles, tree trunks, tree lines, excessive slope, low overhangs, and water bodies. A compact terrain map is built from each frame of stereo images. The mapping algorithm labels cells that contain obstacles as nogo regions, and encodes terrain elevation, terrain classification, terrain roughness, traversability cost, and a confidence value. The single frame maps are merged into a world map where temporal filtering is applied. In previous papers, we have described our perception algorithms that perform binary obstacle detection. In this paper, we summarize the terrain mapping capabilities that JPL has implemented during several UGV programs over the last decade and discuss some challenges to building terrain maps with stereo range data.
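
    The per-cell contents listed in the record suggest a simple map-cell structure. In the sketch below the confidence-weighted merge stands in for the unspecified temporal filter rather than reproducing JPL's actual rule.

        from dataclasses import dataclass

        @dataclass
        class TerrainCell:
            """Per-cell contents named in the record: no-go flag, elevation,
            terrain class, roughness, traversability cost, confidence."""
            nogo: bool = False
            elevation: float = 0.0
            terrain_class: str = "unknown"
            roughness: float = 0.0
            cost: float = 0.0
            confidence: float = 0.0

        def merge(world: TerrainCell, frame: TerrainCell) -> TerrainCell:
            """Fold a single-frame cell into the world map; confidence-weighted
            averaging stands in for the temporal filter."""
            w = world.confidence + frame.confidence
            if w == 0.0:
                return frame
            k = frame.confidence / w
            return TerrainCell(
                nogo=world.nogo or frame.nogo,     # obstacles are sticky
                elevation=(1 - k) * world.elevation + k * frame.elevation,
                terrain_class=frame.terrain_class if k > 0.5 else world.terrain_class,
                roughness=(1 - k) * world.roughness + k * frame.roughness,
                cost=max(world.cost, frame.cost),  # conservative cost
                confidence=min(1.0, w),
            )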

  20. A Highest Order Hypothesis Compatibility Test for Monocular SLAM

    Directory of Open Access Journals (Sweden)

    Edmundo Guerra

    2013-08-01

    Full Text Available Simultaneous Location and Mapping (SLAM) is a key problem to solve in order to build truly autonomous mobile robots. SLAM with a unique camera, or monocular SLAM, is probably one of the most complex SLAM variants, based entirely on a bearing-only sensor working over six DOF. The monocular SLAM method developed in this work is based on the Delayed Inverse-Depth (DI-D) Feature Initialization, with the contribution of a new data association batch validation technique, the Highest Order Hypothesis Compatibility Test (HOHCT). The Delayed Inverse-Depth technique is used to initialize new features in the system and defines a single hypothesis for the initial depth of features with the use of a stochastic technique of triangulation. The introduced HOHCT method is based on the evaluation of statistically compatible hypotheses and a search algorithm designed to exploit the strengths of the Delayed Inverse-Depth technique to achieve good performance results. This work presents the HOHCT with a detailed formulation of the monocular DI-D SLAM problem. The performance of the proposed HOHCT is validated with experimental results, in both indoor and outdoor environments, while its costs are compared with other popular approaches.

  1. 77 FR 75494 - Qualification of Drivers; Exemption Applications; Vision

    Science.gov (United States)

    2012-12-20

    ... Multiple Regression Analysis of a Poisson Process,'' Journal of American Statistical Association, June 1971... 14 applicants' case histories. The 14 individuals applied for exemptions from the vision requirement... apply the principle to monocular drivers, because data from the Federal Highway Administration's (FHWA...

  2. Preliminary Results for a Monocular Marker-Free Gait Measurement System

    Directory of Open Access Journals (Sweden)

    Jane Courtney

    2006-01-01

    Full Text Available This paper presents results from a novel monocular marker-free gait measurement system. The system was designed for physical and occupational therapists to monitor the progress of patients through therapy. It is based on a novel human motion capture method derived from model-based tracking. Testing is performed on two monocular, sagittal-view sample gait videos – one with both the environment and the subject's appearance and movement restricted, and one in a natural environment with unrestricted clothing and motion. Results of the modelling, tracking and analysis stages are presented along with standard gait graphs and parameters.

  3. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    Science.gov (United States)

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
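
    The landmark-versus-obstacle decision hinges on the Shannon entropy of the captured image; a minimal sketch of that computation over a grayscale histogram:

        import numpy as np

        def image_entropy(gray):
            """Shannon entropy (bits) of an 8-bit grayscale image. Low entropy
            suggests a single dominant object (landmark candidate); high
            entropy suggests a cluttered scene (potential obstacle)."""
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]                    # drop empty bins: 0*log2(0) -> 0
            return float(-(p * np.log2(p)).sum())

        flat = np.full((64, 64), 128, dtype=np.uint8)               # uniform patch
        noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        print(image_entropy(flat))    # 0.0 bits
        print(image_entropy(noisy))   # close to 8 bits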

  4. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    Directory of Open Access Journals (Sweden)

    Darío Maravall

    2017-08-01

    Full Text Available We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  5. Towards a Sign-Based Indoor Navigation System for People with Visual Impairments.

    Science.gov (United States)

    Rituerto, Alejandro; Fusco, Giovanni; Coughlan, James M

    2016-10-01

    Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision and floor plan information to estimate the user's location with no additional physical infrastructure and requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants demonstrating the feasibility of the approach and highlighting the areas needing improvement.

  6. Monocular channels have a functional role in endogenous orienting.

    Science.gov (United States)

    Saban, William; Sekely, Liora; Klein, Raymond M; Gabay, Shai

    2018-03-01

    The literature has long emphasized the role of higher cortical structures in endogenous orienting. Based on an evolutionary explanation and previous data, we explored the possibility that lower monocular channels may also have a functional role in endogenous orienting of attention. A sensitive behavioral manipulation was used to probe the contribution of monocularly segregated regions in a simple cue–target detection task. A central spatially informative cue, and its ensuing target, were presented to the same or different eyes at varying cue–target intervals. Results indicated that the onset of endogenous orienting was apparent earlier when the cue and target were presented to the same eye. The data provide converging evidence for the notion that endogenous facilitation is modulated by monocular portions of the visual stream. This, in turn, suggests that higher cortical mechanisms are not exclusively responsible for endogenous orienting, and that a dynamic interaction between higher and lower neural levels might be involved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Long-Term Visual Training Increases Visual Acuity and Long-Term Monocular Deprivation Promotes Ocular Dominance Plasticity in Adult Standard Cage-Raised Mice.

    Science.gov (United States)

    Hosang, Leon; Yusifov, Rashad; Löwel, Siegrid

    2018-01-01

    For routine behavioral tasks, mice predominantly rely on olfactory cues and tactile information. In contrast, their visual capabilities appear rather restricted, raising the question whether they can improve if vision gets more behaviorally relevant. We therefore performed long-term training using the visual water task (VWT): adult standard cage (SC)-raised mice were trained to swim toward a rewarded grating stimulus so that using visual information avoided excessive swimming toward nonrewarded stimuli. Indeed, and in contrast to old mice raised in a generally enriched environment (Greifzu et al., 2016), long-term VWT training increased visual acuity (VA) on average by more than 30% to 0.82 cycles per degree (cyc/deg). In an individual animal, VA even increased to 1.49 cyc/deg, i.e., beyond the rat range of VAs. Since visual experience enhances the spatial frequency threshold of the optomotor (OPT) reflex of the open eye after monocular deprivation (MD), we also quantified monocular vision after VWT training. Monocular VA did not increase reliably, and eye reopening did not initiate a decline to pre-MD values as observed by optomotry; VA values rather increased by continued VWT training. Thus, optomotry and VWT measure different parameters of mouse spatial vision. Finally, we tested whether long-term MD induced ocular dominance (OD) plasticity in the visual cortex of adult [postnatal day (P)162-P182] SC-raised mice. This was indeed the case: 40-50 days of MD induced OD shifts toward the open eye in both VWT-trained and, surprisingly, also in age-matched mice without VWT training. These data indicate that (1) long-term VWT training increases adult mouse VA, and (2) long-term MD induces OD shifts also in adult SC-raised mice.

  8. Improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery

    OpenAIRE

    Elliott, D.; Patla, A.; Bullimore, M.

    1997-01-01

    AIMS—To determine the improvements in clinical and functional vision and perceived visual disability after first and second eye cataract surgery.
    METHODS—Clinical vision (monocular and binocular high and low contrast visual acuity, contrast sensitivity, and disability glare), functional vision (face identity and expression recognition, reading speed, word acuity, and mobility orientation), and perceived visual disability (Activities of Daily Vision Scale) were measured in 25 subjects before a...

  9. Effect of Vision Therapy on Accommodation in Myopic Chinese Children

    Directory of Open Access Journals (Sweden)

    Martin Ming-Leung Ma

    2016-01-01

    Full Text Available Introduction. We evaluated the effectiveness of office-based accommodative/vergence therapy (OBAVT) with home reinforcement to improve accommodative function in myopic children with poor accommodative response. Methods. This was a prospective unmasked pilot study. 14 Chinese myopic children aged 8 to 12 years with at least 1 D of lag of accommodation were enrolled. All subjects received 12 weeks of 60-minute OBAVT with home reinforcement. The primary outcome measure was the change in monocular lag of accommodation from the baseline visit to the 12-week visit, measured by a Shin-Nippon open-field autorefractor. Secondary outcome measures were the changes in accommodative amplitude and monocular accommodative facility. Results. All participants completed the study. The lag of accommodation at the baseline visit was 1.29 ± 0.21 D and it was reduced to 0.84 ± 0.19 D at the 12-week visit. This difference (−0.46 ± 0.22 D; 95% confidence interval: −0.58 to −0.33 D) is statistically significant (p<0.0001). OBAVT also increased the amplitude and facility by 3.66 ± 3.36 D (p=0.0013; 95% confidence interval: 1.72 to 5.60 D) and 10.9 ± 4.8 cpm (p<0.0001; 95% confidence interval: 8.1 to 13.6 cpm), respectively. Conclusion. A standardized 12 weeks of OBAVT with home reinforcement is able to significantly reduce monocular lag of accommodation and increase monocular accommodative amplitude and facility. A randomized clinical trial designed to investigate the effect of vision therapy on myopia progression is warranted.

  10. Parsimonious Ways to Use Vision for Navigation

    Directory of Open Access Journals (Sweden)

    Paul Graham

    2012-05-01

    Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains; thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
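    To make the familiarity idea concrete, here is a minimal sketch (not the authors' model) of familiarity-based route following: training views are stored unprocessed, and at test time the agent turns toward whichever viewing direction looks most like something it has seen before. The panorama layout, window size, and similarity measure are illustrative assumptions.

    ```python
    import numpy as np

    def familiarity(view, memory_bank):
        # Lower is more familiar: best sum-of-squared-differences match
        # between the current view and any remembered training view.
        return min(np.sum((view - m) ** 2) for m in memory_bank)

    def best_heading(panorama, memory_bank, fov=60, step=5):
        # Slide a fov-degree window over a (height x 360) grayscale panorama
        # and return the heading whose view is most familiar.
        scores = []
        for h in range(0, 360, step):
            cols = [(h + c) % 360 for c in range(fov)]
            scores.append((familiarity(panorama[:, cols], memory_bank), h))
        return min(scores)[1]
    ```

    No object recognition or waypoint list is involved: the agent steps in the returned heading, re-scans, and repeats, which is the parsimony the abstract argues for.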

  11. Bilateral symmetry in vision and influence of ocular surgical procedures on binocular vision: A topical review

    Directory of Open Access Journals (Sweden)

    Samuel Arba Mosquera

    2016-10-01

    Full Text Available We analyze the role of bilateral symmetry in enhancing binocular visual ability in human eyes, and further explore how efficiently bilateral symmetry is preserved in different ocular surgical procedures. The inclusion criterion for this review was strict relevance to the clinical questions under research. Enantiomorphism has been reported in lower order aberrations, higher order aberrations and cone directionality. When contrast differs in the two eyes, binocular acuity is better than the monocular acuity of the eye that receives the higher contrast. Anisometropia occurs uncommonly in large populations. Anisometropia seen in infancy and childhood is transitory and of little consequence for visual acuity. Binocular summation of contrast signals declines with age, independent of inter-ocular differences. The symmetric associations between the right and left eye could be explained by the symmetry in pupil offset and visual axis, which is always nasal in both eyes. Binocular summation mitigates poor visual performance under low luminance conditions, and strong inter-ocular disparity detrimentally affects binocular summation. Considerable symmetry of response exists in fellow eyes of patients undergoing myopic PRK and LASIK; however, the methods used to determine whether or not symmetry is maintained consist of comparing individual terms in a variety of ad hoc ways both before and after refractive surgery, ignoring the fact that retinal image quality for any individual is based on the sum of all terms. The analysis of bilateral symmetry should be related to the patient's binocular vision status. The role of aberrations in monocular and binocular vision needs further investigation.

  12. Mapping, Navigation, and Learning for Off-Road Traversal

    DEFF Research Database (Denmark)

    Konolige, Kurt; Agrawal, Motilal; Blas, Morten Rufus

    2009-01-01

    The challenge in the DARPA Learning Applied to Ground Robots (LAGR) project is to autonomously navigate a small robot using stereo vision as the main sensor. During this project, we demonstrated a complete autonomous system for off-road navigation in unstructured environments, using stereo vision, online terrain traversability learning, visual odometry, map registration, planning, and control. At the end of 3 years, the system we developed outperformed all nine other teams in final blind tests over previously unseen terrain.

  13. EVALUATION OF SIFT AND SURF FOR VISION BASED LOCALIZATION

    Directory of Open Access Journals (Sweden)

    X. Qu

    2016-06-01

    Full Text Available Vision based localization is widely investigated for autonomous navigation and robotics. One of the basic steps of vision based localization is the extraction of interest points in images that are captured by the embedded camera. In this paper, the SIFT and SURF extractors were chosen to evaluate their performance in localization. Four street view image sequences captured by a mobile mapping system were used for the evaluation, and both SIFT and SURF were tested on different image scales. Besides, the impact of the interest point distribution was also studied. We evaluated the performance from four aspects: repeatability, precision, accuracy and runtime. The local bundle adjustment method was applied to refine the pose parameters and the 3D coordinates of tie points. According to the results of our experiments, SIFT was more reliable than SURF. Apart from this, both the accuracy and the efficiency of localization can be improved if the distribution of feature points is well constrained for SIFT.
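    For readers who want to reproduce this kind of comparison, the sketch below extracts SIFT tie-point candidates between two consecutive frames with OpenCV and Lowe's ratio test; the filenames and the ratio threshold are placeholders, and SURF (patented, available only in opencv-contrib builds) would slot in the same way.

    ```python
    import cv2

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Keep only matches passing Lowe's ratio test (0.75 is a common choice).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    print(f"{len(good)} putative tie points")
    ```

    The surviving matches are exactly the tie points that a local bundle adjustment, as used in the paper, would then refine together with the pose parameters.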

  14. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on the Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from the remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, the flight navigation information is linked to the database. The flight approach area vision can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, the pilots and aircrew can get a vivid 3D vision of the flight destination approach area. Using this system in the pilots' preflight preparation procedure, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before he carries out the flight mission; accordingly, flight safety is improved. The system is also useful for validating visual flight procedure designs.

  15. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter

    NARCIS (Netherlands)

    Hiremath, S.A.; Heijden, van der G.W.A.M.; Evert, van F.K.; Stein, A.; Braak, ter C.J.F.

    2014-01-01

    Autonomous navigation of robots in an agricultural environment is a difficult task due to the inherent uncertainty in the environment. Many existing agricultural robots use computer vision and other sensors to supplement Global Positioning System (GPS) data when navigating. Vision based methods are

  16. Correlations of memory and learning with vision in aged patients before and after a cataract operation.

    Science.gov (United States)

    Fagerström, R

    1992-12-01

    The connection of memory and learning with vision was investigated by studying 100 cataract operation patients, aged 71 to 76 years, 25 of them being men and 75 women. The cataract operation restored sufficient acuity of vision for reading (minimum E-test value 0.40) to 79% of the subjects. Short-term memory was studied with series of numbers, homogenic and heterogenic inhibition, and long sentences. Learning was tested with paired-associate learning and word learning. Psychological symptoms were measured on the Brief Psychiatric Rating Scale and personality on the Mini-Mult MMPI. Memory and learning improved significantly when vision was normalized after the cataract operation. Poor memory and learning scores correlated with monocular vision before the operation and with defects in the field of vision, due to glaucoma and exceeding 20%, after surgery. Monocular vision and defects in the visual field caused a continuous sense of abnormality, which impaired the old people's ability to concentrate on tasks of memory and learning. Cerebrovascular disturbances, beginning dementia, and moderate psychological symptoms obstructed memory and learning on both test rounds. Depression was the most important psychological symptom contributing to poor memory and learning scores after the cataract operation. The memory and learning defects mainly reflected disturbances in memorizing.

  17. Ergonomic evaluation of ubiquitous computing with monocular head-mounted display

    Science.gov (United States)

    Kawai, Takashi; Häkkinen, Jukka; Yamazoe, Takashi; Saito, Hiroko; Kishi, Shinsuke; Morikawa, Hiroyuki; Mustonen, Terhi; Kaistinen, Jyrki; Nyman, Göte

    2010-01-01

    In this paper, the authors conducted an experiment to evaluate the UX in an actual outdoor environment, assuming the casual use of a monocular HMD to view video content during a short walk. In the experiment, eight subjects were asked to view news videos on a monocular HMD while walking through a large shopping mall. Two types of monocular HMDs and a hand-held media player were used, and the psycho-physiological responses of the subjects were measured before, during, and after the experiment. The VSQ, SSQ and NASA-TLX were used to assess the subjective workloads and symptoms. The objective indexes were heart rate, stride, and a video recording of the environment in front of the subject's face. The results revealed differences between the two types of monocular HMDs as well as between the monocular HMDs and the other conditions. Differences between the types of monocular HMDs may have been due to screen vibration during walking, which was considered a major factor in the UX in terms of workload. Future experiments conducted in other locations will involve higher cognitive loads in order to study performance and situation awareness in actual and media environments.

  18. Recovery of neurofilament following early monocular deprivation

    Directory of Open Access Journals (Sweden)

    Timothy P O'Leary

    2012-04-01

    Full Text Available A brief period of monocular deprivation in early postnatal life can alter the structure of neurons within deprived-eye-receiving layers of the dorsal lateral geniculate nucleus. The modification of structure is accompanied by a marked reduction in labeling for neurofilament, a protein that composes the stable cytoskeleton and that supports neuron structure. This study examined the extent of neurofilament recovery in monocularly deprived cats that either had their deprived eye opened (binocular recovery) or had the deprivation reversed to the fellow eye (reverse occlusion). The degree to which recovery was dependent on visually-driven activity was examined by placing monocularly deprived animals in complete darkness (dark rearing). The loss of neurofilament and the reduction of soma size caused by monocular deprivation were both ameliorated equally following either binocular recovery or reverse occlusion for 8 days. Though monocularly deprived animals placed in complete darkness showed recovery of soma size, there was a generalized loss of neurofilament labeling that extended to originally non-deprived layers. Overall, these results indicate that recovery of soma size is achieved by removal of the competitive disadvantage of the deprived eye, and occurred even in the absence of visually-driven activity. Recovery of neurofilament occurred when the competitive disadvantage of the deprived eye was removed, but unlike the recovery of soma size, was dependent upon visually-driven activity. The role of neurofilament in providing stable neural structure raises the intriguing possibility that dark rearing, which reduced overall neurofilament levels, could be used to reset the deprived visual system so as to make it more amenable to treatment by experiential manipulations.

  19. A 10-gram Microflyer for Vision-based Indoor Navigation

    OpenAIRE

    Zufferey, Jean-Christophe; Klaptocz, Adam; Beyeler, Antoine; Nicoud, Jean-Daniel; Floreano, Dario

    2006-01-01

    We aim at developing ultralight autonomous microflyers capable of navigating within houses or small built environments. Our latest prototype is a fixed-wing aircraft weighing a mere 10 g, flying around 1.5 m/s and carrying the necessary electronics for airspeed regulation and collision avoidance. This microflyer is equipped with two tiny camera modules, two rate gyroscopes, an anemometer, a small microcontroller, and a Bluetooth radio module. In-flight tests are carried out ...

  20. The prevalence of vision loss due to ocular trauma in the Australian National Eye Health Survey.

    Science.gov (United States)

    Keel, Stuart; Xie, Jing; Foreman, Joshua; Taylor, Hugh R; Dirani, Mohamed

    2017-11-01

    To determine the prevalence of vision loss due to ocular trauma in Australia. The National Eye Health Survey (NEHS) is a population-based cross-sectional study that examined 3098 non-Indigenous Australians (aged 50-98 years) and 1738 Indigenous Australians (aged 40-92 years) living in 30 randomly selected sites, stratified by remoteness. An eye was considered to have vision loss due to trauma if the best-corrected visual acuity was worse than 6/12 and the main cause was attributed to ocular trauma. This determination was made by two independent ophthalmologists, and any disagreements were adjudicated by a third senior ophthalmologist. The sampling weight adjusted prevalence of vision loss due to ocular trauma in non-Indigenous Australians aged 50 years and older and Indigenous Australians aged 40 years and over was 0.24% (95%CI: 0.10, 0.52) and 0.79% (95%CI: 0.56, 1.13), respectively. Trauma was attributed as an underlying cause of bilateral vision loss in one Indigenous participant, with all other cases being monocular. Males displayed a higher prevalence of vision loss from ocular trauma than females in both the non-Indigenous (0.38% vs. 0.12%, p=0.03) and Indigenous populations (1.25% vs. 0.47%, p=0.02). After multivariate adjustments, residing in Very Remote geographical areas was associated with higher odds of vision loss from ocular trauma. We estimate that 2.4 per 1000 non-Indigenous and 7.9 per 1000 Indigenous Australian adults have monocular vision loss due to a previous severe ocular trauma. Our findings indicate that males, Indigenous Australians and those residing in Very Remote communities may benefit from targeted health promotion to improve awareness of trauma prevention strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Autonomous Navigation in GNSS-Denied Environments, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Aurora proposes to transition UMD methods for insect-inspired, lightweight vision- and optical sensor-based navigation methods for a combined air-ground system that...

  2. Operational Based Vision Assessment Automated Vision Test Collection User Guide

    Science.gov (United States)

    2017-05-15

    AFRL-SA-WP-SR-2017-0012, Operational Based Vision Assessment Automated Vision Test Collection User Guide; Elizabeth Shoda, Alex...; reporting period June 2015 – May 2017. The report documents the automated vision tests, or AVT. Development of the AVT was required to support threshold-level vision testing capability needed to investigate the

  3. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  4. Implementation Of Vision-Based Landing Target Detection For VTOL UAV Using Raspberry Pi

    Directory of Open Access Journals (Sweden)

    Ei Ei Nyein

    2017-04-01

    Full Text Available This paper presents the development and implementation of a real-time vision-based landing system for a VTOL UAV. We use vision for precise target detection and recognition. The UAV is equipped with an onboard Raspberry Pi camera to take images and a Raspberry Pi platform to run the image processing techniques. Image processing is used for various applications today; in this paper, it is used for landing target extraction. The vision system also supports the take-off and landing functions of the VTOL UAV. The landing target is designed as a helipad "H" shape. First, an image is captured by the onboard camera to detect the target. Next, the captured image is processed on the onboard processor. Finally, an alert sound signal is sent to the remote control (RC) for landing the VTOL UAV. The information obtained from the vision system is used to navigate a safe landing. Experimental results from real tests are presented.
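    As a flavor of what such onboard target extraction can look like, the sketch below finds a dark "H"-like blob with OpenCV. The area and solidity thresholds are illustrative guesses rather than the authors' values, and a real system would verify the candidate against the known helipad geometry.

    ```python
    import cv2

    def find_h_target(frame):
        # Return the centroid (x, y) of the most H-like dark blob, or None.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in sorted(contours, key=cv2.contourArea, reverse=True):
            x, y, w, h = cv2.boundingRect(c)
            solidity = cv2.contourArea(c) / float(w * h)
            # An 'H' fills roughly half its bounding box; tune on real footage.
            if w * h > 500 and 0.3 < solidity < 0.7:
                m = cv2.moments(c)
                return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        return None
    ```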

  5. Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles

    Directory of Open Access Journals (Sweden)

    Miguel Angel Olivares-Mendez

    2016-03-01

    Full Text Available Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for an automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without the intervention of a human driver and any interruption.
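    A minimal sketch of the line-detection idea follows: isolate the painted line by color in a road region of interest, extract segments with a probabilistic Hough transform, and convert the line's offset from the image centre into a steering error. The HSV thresholds and geometry are illustrative assumptions, not the parameters of the published system.

    ```python
    import cv2
    import numpy as np

    def steering_error(frame):
        # Pixel offset of the guide line from the image centre
        # (negative = line is left of centre); None if nothing is found.
        h, w = frame.shape[:2]
        roi = frame[int(0.6 * h):, :]               # road area ahead of the car
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))  # yellowish paint
        lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=40,
                                minLineLength=40, maxLineGap=10)
        if lines is None:
            return None
        x1, y1, x2, y2 = max(lines[:, 0],
                             key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
        return (x1 + x2) / 2.0 - w / 2.0
    ```

    An error term of this kind would then drive the steering controller that keeps the vehicle on the predefined path.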

  6. A novel visual-inertial monocular SLAM

    Science.gov (United States)

    Yue, Xiaofeng; Zhang, Wenjuan; Xu, Li; Liu, JiangGuo

    2018-02-01

    With the development of sensors and the computer vision research community, cameras, which are accurate, compact, well understood and, most importantly, cheap and ubiquitous today, have gradually moved to the center of robot localization. Simultaneous localization and mapping (SLAM) using visual features is a system that obtains motion information from image acquisition equipment and rebuilds the structure of an unknown environment. We provide an analysis of bioinspired flight in insects, employing a novel technique based on SLAM. Combining visual and inertial measurements to obtain high accuracy and robustness, we present a novel tightly-coupled visual-inertial simultaneous localization and mapping system that makes a new attempt to address two challenges: the initialization problem and the calibration problem. Experimental results and analysis show that the proposed approach yields a more accurate quantitative simulation of insect navigation and can reach centimeter-level positioning accuracy.
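    The paper's system is tightly coupled, but the intuition behind visual-inertial fusion can be shown with a much simpler loosely-coupled sketch: a complementary filter that integrates a gyro rate (smooth but drifting) and continuously pulls the estimate toward a vision-derived heading (noisy but drift-free). The gain is an illustrative assumption.

    ```python
    def fuse_heading(heading, gyro_rate, vision_heading, dt, k=0.02):
        # Predict with the gyro, then correct a little toward vision.
        heading += gyro_rate * dt                        # integrates, but drifts
        return heading + k * (vision_heading - heading)  # drift-free anchor
    ```

    Tightly-coupled systems such as the one described instead fuse raw feature and IMU measurements inside a single estimator, which is precisely what makes initialization and calibration the hard parts.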

  7. A low cost PSD-based monocular motion capture system

    Science.gov (United States)

    Ryu, Young Kee; Oh, Choonsuk

    2007-10-01

    This paper describes a monocular PSD-based motion capture sensor for use with commercial video game systems such as Microsoft's XBOX and Sony's Playstation II. The system is compact, low-cost, and only requires a one-time calibration at the factory. The system includes a PSD (Position Sensitive Detector) and active infrared (IR) LED markers that are placed on the object to be tracked. The PSD sensor is placed in the focal plane of a wide-angle lens. The micro-controller calculates the 3D position of the markers using only the measured intensity and the 2D position on the PSD. A series of experiments were performed to evaluate the performance of our prototype system. From the experimental results we see that the proposed system offers compact size, low cost, easy installation, and frame rates high enough for high-speed motion tracking in games.
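    The "intensity plus 2D position" recipe can be sketched in a few lines: received IR power falls off with the square of the distance, which yields a range, and the 2D spot position fixes the ray direction through the lens. Every calibration constant below is illustrative.

    ```python
    import numpy as np

    def marker_position(u, v, intensity, f=4.0, i_ref=1.0, r_ref=1.0):
        # Range from the inverse-square law: I ~ 1/r^2, calibrated so that
        # intensity i_ref is received at range r_ref (placeholder values).
        r = r_ref * np.sqrt(i_ref / intensity)
        ray = np.array([u, v, f], dtype=float)  # (u, v) on the PSD, f = focal length
        ray /= np.linalg.norm(ray)              # unit ray through the lens centre
        return r * ray                          # 3D marker position estimate
    ```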

  8. vSLAM: vision-based SLAM for autonomous vehicle navigation

    Science.gov (United States)

    Goncalves, Luis; Karlsson, Niklas; Ostrowski, Jim; Di Bernardo, Enrico; Pirjanian, Paolo

    2004-09-01

    Among the numerous challenges of building autonomous/unmanned vehicles is that of reliable and autonomous localization in an unknown environment. In this paper we present a system that can efficiently and autonomously solve the robotics 'SLAM' problem, in which a robot placed in an unknown environment must simultaneously localize itself and make a map of the environment. The system is vision-based, and makes use of Evolution Robotics' powerful object recognition technology. As the robot explores the environment, it continuously performs four tasks, using information from acquired images and the drive system odometry. The robot: (1) recognizes previously created 3-D visual landmarks; (2) builds new 3-D visual landmarks; (3) updates the current estimate of its location, using the map; (4) updates the landmark map. In indoor environments, the system can build a map of a 5 m by 5 m area in approximately 20 minutes, and can localize itself with an accuracy of approximately 15 cm in position and 3 degrees in orientation relative to the global reference frame of the landmark map. The same system can be adapted for outdoor, vehicular use.
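    Tasks (3) and (4) follow the classic predict/correct cycle. The cartoon below (diagonal covariances, no angle wrap-around, nothing like the production system) shows how odometry grows uncertainty and a recognized landmark shrinks it:

    ```python
    import numpy as np

    def predict(pose, P, odom, Q):
        # Dead reckoning: apply the odometry delta, inflate uncertainty.
        return pose + odom, P + Q

    def correct(pose, P, meas, R):
        # A recognized landmark supplies an absolute pose measurement;
        # blend it in Kalman-style, trusting it more as P grows.
        K = P / (P + R)
        return pose + K * (meas - pose), (1.0 - K) * P

    pose, P = np.zeros(3), np.full(3, 0.01)   # (x, y, heading) and its variance
    pose, P = predict(pose, P, odom=np.array([0.10, 0.0, 0.02]), Q=np.full(3, 1e-4))
    pose, P = correct(pose, P, meas=np.array([0.12, 0.01, 0.02]), R=np.full(3, 4e-4))
    ```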

  9. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    Science.gov (United States)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
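    The input representation described, an RGB image reinforced with sparse depth, is easy to sketch: concatenate a sparse depth channel (zeros where the LiDAR gave no return) with the RGB channels, regress dense depth, and compute the loss only where ground truth exists. The toy network below is purely illustrative and far shallower than the paper's.

    ```python
    import torch
    import torch.nn as nn

    class SparseToDenseNet(nn.Module):
        # Toy model: RGB (3 ch) + sparse depth (1 ch) in, dense depth out.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            )

        def forward(self, rgb, sparse_depth):
            return self.net(torch.cat([rgb, sparse_depth], dim=1))

    def masked_l1(pred, target, valid):
        # Supervise only pixels where the LiDAR provides ground truth.
        return (valid * (pred - target).abs()).sum() / valid.sum().clamp(min=1)

    net = SparseToDenseNet()
    rgb, sparse = torch.rand(1, 3, 64, 64), torch.zeros(1, 1, 64, 64)
    dense = net(rgb, sparse)   # per-pixel depth estimate
    ```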

  10. Short-Term Monocular Deprivation Enhances Physiological Pupillary Oscillations

    Directory of Open Access Journals (Sweden)

    Paola Binda

    2017-01-01

    Full Text Available Short-term monocular deprivation alters visual perception in adult humans, increasing the dominance of the deprived eye, for example, as measured with binocular rivalry. This form of plasticity may depend upon the inhibition/excitation balance in the visual cortex. Recent work suggests that cortical excitability is reliably tracked by dilations and constrictions of the pupils of the eyes. Here, we ask whether monocular deprivation produces a systematic change of pupil behavior, as measured at rest, that is independent of the change of visual perception. During periods of minimal sensory stimulation (in the dark) and minimal task requirements (minimizing body and gaze movements), slow pupil oscillations, "hippus," spontaneously appear. We find that hippus amplitude increases after monocular deprivation, with larger hippus changes in participants showing larger ocular dominance changes (measured by binocular rivalry). This tight correlation suggests that a single latent variable explains both the change of ocular dominance and hippus. We speculate that the neurotransmitter norepinephrine may be implicated in this phenomenon, given its important role in both plasticity and pupil control. On the practical side, our results indicate that measuring the pupil hippus (a simple and short procedure) provides a sensitive index of the change of ocular dominance induced by short-term monocular deprivation, hence a proxy for plasticity.

  11. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    Science.gov (United States)

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard- and mouse-based interfaces.

  12. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  13. Research on three-dimensional reconstruction method based on binocular vision

    Science.gov (United States)

    Li, Jinlin; Wang, Zhihui; Wang, Minjun

    2018-03-01

    As a hot and difficult issue in computer vision, binocular stereo vision is an important form of computer vision, with broad application prospects in many fields such as aerial mapping, vision navigation, motion analysis, and industrial inspection. In this paper, research is done into binocular stereo camera calibration, image feature extraction, and stereo matching. In the camera calibration module, the internal parameters of a single camera are obtained using Zhang Zhengyou's checkerboard method. For image feature extraction and stereo matching, the SURF operator (a local feature operator) and the SGBM algorithm (a global matching algorithm) are adopted respectively, and their performance is compared. After the feature points are matched, the correspondence between matched image points and 3D object points, i.e., the 3D information, can be established using the calibrated camera parameters.
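    The SGBM step mentioned above is available directly in OpenCV; the sketch below computes a disparity map from a rectified pair and converts it to depth. Filenames, matcher parameters, and the calibration values (focal length f in pixels, baseline b in metres) are placeholders.

    ```python
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # (placeholder paths)

    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, P1=8 * 5 * 5, P2=32 * 5 * 5)
    disp = sgbm.compute(left, right).astype("float32") / 16.0  # fixed point -> px

    f, b = 700.0, 0.12                  # assumed focal length (px), baseline (m)
    depth = f * b / disp.clip(min=0.1)  # triangulation: Z = f * b / disparity
    ```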

  14. Enabling Autonomous Navigation for Affordable Scooters.

    Science.gov (United States)

    Liu, Kaikai; Mulky, Rajathswaroop

    2018-06-05

    Despite the technical success of existing assistive technologies, for example, electric wheelchairs and scooters, they are still far from effective enough in helping those in need navigate to their destinations in a hassle-free manner. In this paper, we propose to improve the safety and autonomy of navigation by designing a cutting-edge autonomous scooter, thus allowing people with mobility challenges to ambulate independently and safely in possibly unfamiliar surroundings. We focus on indoor navigation scenarios for the autonomous scooter where the current location, maps, and nearby obstacles are unknown. To achieve semi-LiDAR functionality, we leverage gyro-based pose data to compensate the laser motion in real time and create synthetic mapping of simple environments with regular shapes and deep hallways. Laser range finders are suitable for long ranges but have limited resolution. Stereo vision, on the other hand, provides 3D structural data of nearby complex objects. To achieve simultaneously fine-grained resolution and long-range coverage in the mapping of cluttered and complex environments, we dynamically fuse the measurements from the stereo vision camera system, the synthetic laser scanner, and the LiDAR. We propose solutions to self-correct errors in data fusion and create a hybrid map to assist the scooter in achieving collision-free navigation in an indoor environment.
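    One simple way to picture the hybrid map is an occupancy grid that both sensors write into, so sparse long-range LiDAR returns and dense short-range stereo points land in the same cells. The sketch below shows only that picture, not the paper's self-correcting fusion; the grid resolution and size are arbitrary.

    ```python
    import numpy as np

    def fuse_to_grid(lidar_pts, stereo_pts, res=0.05, size=200):
        # lidar_pts, stereo_pts: N x 2 arrays of (x, y) points in metres,
        # in the scooter frame. A cell is occupied if either sensor hits it.
        grid = np.zeros((size, size), dtype=bool)
        for pts in (lidar_pts, stereo_pts):
            ij = np.floor(pts / res).astype(int) + size // 2
            ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
            grid[ij[ok, 0], ij[ok, 1]] = True
        return grid
    ```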

  15. Unexpected course of treatment in a thirteen-year-old boy with unilateral vision disorder

    International Nuclear Information System (INIS)

    Dorobisz, A.T.; Rucinski, A.; Ujma-Czapska, B.; Grotowska, M.; Zaleska-Dorobisz, U.

    2005-01-01

    Monocular vision disorder is a characteristic symptom of brain ischemia of the second to fourth stage of Millikan's scale. Partial occipital epilepsy, where optic symptoms are usually bilateral and convulsions or automatisms appear later, is seldom the cause. In this paper the authors present a special case of a vision disorder in a 13-year-old boy in whom kinking of the internal carotid artery (ICA) was recognized in the first stage of diagnostic examination and treatment as the basic cause of the disorder. No other causes were confirmed. Surgical angioplasty of the carotid artery was performed with restoration of proper circulation. Despite the treatment, the symptoms persisted. In the further course of the disease, after repeated diagnostic imaging, childhood epilepsy with occipital spikes was recognized as the cause of the monocular anopsia. All the symptoms disappeared after pharmacological treatment. The surgeon must prove that the neurological symptoms are caused by kinking of the internal carotid artery and that no other etiology exists. (author)

  16. Visual navigation using edge curve matching for pinpoint planetary landing

    Science.gov (United States)

    Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei

    2018-05-01

    Pinpoint landing is challenging for future Mars and asteroid exploration missions. A vision-based navigation scheme based on feature detection and matching is practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and utilize poor-performance measurements, which pose great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of the craters tracked across two sequential images are utilized to determine the relative attitude and position of the lander through a normalized method. Then, considering the error accumulation of relative navigation, a method is developed that integrates the crater-based relative navigation with a crater-based absolute navigation method, which identifies craters using a georeferenced database, for continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve the navigation performance; these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
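    Crater rims project to approximately elliptical edge curves, so a plausible first step (sketched below with OpenCV as one reading of such a pipeline, not the authors' exact method) is to extract edges and fit ellipses to sufficiently long contours; the fitted parameters are what would be tracked between frames or matched against a georeferenced crater database.

    ```python
    import cv2

    def crater_ellipses(image, min_points=20):
        # Fit an ellipse to every sufficiently long edge contour;
        # returns a list of ((cx, cy), (major, minor), angle) tuples.
        edges = cv2.Canny(image, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_NONE)
        return [cv2.fitEllipse(c) for c in contours if len(c) >= min_points]
    ```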

  17. Cognitive mapping based on synthetic vision?

    Science.gov (United States)

    Helmetag, Arnd; Halbig, Christian; Kubbat, Wolfgang; Schmidt, Rainer

    1999-07-01

    The analysis of accidents focused our work on the avoidance of 'Controlled Flight Into Terrain' caused by insufficient situation awareness. Analysis of safety concepts led us to the design of the proposed synthetic vision system (SVS) that will be described. Since most information on these 3D displays is shown in a graphical way, it can intuitively be understood by the pilot. What new possibilities does an SVS offer for enhancing situation awareness? First, detection of a ground collision hazard is possible by monitoring a perspective Primary Flight Display. From the psychological point of view, this is based on the perception of expanding objects in the visual flow field. Supported by a Navigation Display, a local conflict resolution can be mentally worked out very fast. Second, it is possible to follow a 3D flight path visualized as a 'tunnel in the sky.' This can further be improved by using a flight path prediction. These are the prerequisites for safe and adequate movement in any kind of spatial environment. However, situation awareness requires the ability of navigation and spatial problem solving. Both abilities are based on higher cognitive functions in a real as well as in a synthetic environment. In this paper the current training concept will be analyzed. Advantages resulting from the integration of an SVS concerning pilot training will be discussed and necessary requirements in terrain depiction will be pinpointed. Finally, a modified Computer Based Training for familiarization with Salzburg Airport for an SVS-equipped aircraft will be presented. It is developed by Darmstadt University of Technology in co-operation with Lufthansa Flight Training.

  18. On so-called paradoxical monocular stereoscopy.

    Science.gov (United States)

    Koenderink, J J; van Doorn, A J; Kappers, A M

    1994-01-01

    Human observers are apparently well able to judge properties of 'three-dimensional objects' on the basis of flat pictures such as photographs of physical objects. They obtain this 'pictorial relief' without much conscious effort and with little interference from the (flat) picture surface. Methods for 'magnifying' pictorial relief from single pictures include viewing instructions as well as a variety of monocular and binocular 'viewboxes'. Such devices are reputed to yield highly increased pictorial depth, though no methodologies for the objective verification of such claims exist. A binocular viewbox has been reconstructed and pictorial relief under monocular, 'synoptic', and natural binocular viewing is described. The results corroborate and go beyond early introspective reports and turn out to pose intriguing problems for modern research.

  1. Insect-Based Vision for Autonomous Vehicles: A Feasibility Study

    Science.gov (United States)

    Srinivasan, Mandyam V.

    1999-01-01

    The aims of the project were to use a high-speed digital video camera to pursue two questions: (1) To explore the influence of temporal imaging constraints on the performance of vision systems for autonomous mobile robots; (2) To study the fine structure of insect flight trajectories in order to better understand the characteristics of flight control, orientation and navigation.

  2. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our syst...
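    The ORB features that give the system its name, and a two-view relative pose from them, can be reproduced in a few lines of OpenCV. Note that the recovered translation is only a unit direction: the scale ambiguity of monocular vision is exactly why the paper emphasizes stereo and RGB-D observations for metric scale. Frame paths and intrinsics below are placeholders.

    ```python
    import cv2
    import numpy as np

    img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder frames
    img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
    K = np.array([[700.0, 0.0, 320.0],                      # assumed intrinsics
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # ORB descriptors are binary, so match with Hamming distance.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    print("rotation:\n", R, "\ntranslation direction:", t.ravel())
    ```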

  3. Dichoptic training in adults with amblyopia: Additional stereoacuity gains over monocular training.

    Science.gov (United States)

    Liu, Xiang-Yun; Zhang, Jun-Yun

    2017-08-04

    Dichoptic training is a recent focus of research on perceptual learning in adults with amblyopia, but whether and how dichoptic training is superior to traditional monocular training is unclear. Here we investigated whether dichoptic training could further boost visual acuity and stereoacuity in monocularly well-trained adult amblyopic participants. During dichoptic training the participants used the amblyopic eye to practice a contrast discrimination task, while a band-filtered noise masker was simultaneously presented in the non-amblyopic fellow eye. Dichoptic learning was indexed by the increase of maximal tolerable noise contrast for successful contrast discrimination in the amblyopic eye. The results showed that practice tripled maximal tolerable noise contrast in 13 monocularly well-trained amblyopic participants. Moreover, the training further improved stereoacuity by 27% beyond the 55% gain from previous monocular training, but left visual acuity of the amblyopic eyes unchanged. Therefore our dichoptic training method may produce extra gains of stereoacuity, but not visual acuity, in adults with amblyopia after monocular training. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see-through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After CT scans had been taken, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  5. Decrease in monocular sleep after sleep deprivation in the domestic chicken

    NARCIS (Netherlands)

    Boerema, AS; Riedstra, B; Strijkstra, AM

    2003-01-01

    We investigated the trade-off between sleep need and alertness by challenging chickens to modify their monocular sleep. We sleep-deprived domestic chickens (Gallus domesticus) to increase their sleep need. We found that in response to sleep deprivation the fraction of monocular sleep within sleep

  6. Real-time Pedestrian Crossing Recognition for Assistive Outdoor Navigation.

    Science.gov (United States)

    Fontanesi, Simone; Frigerio, Alessandro; Fanucci, Luca; Li, William

    2015-01-01

    Navigation in urban environments can be difficult for people who are blind or visually impaired. In this project, we present a system and algorithms for recognizing pedestrian crossings in outdoor environments. Our goal is to provide navigation cues for crossing the street and reaching an island or sidewalk safely. Using a state-of-the-art Multisense S7S sensor, we collected 3D pointcloud data for real-time detection of pedestrian crossings and generation of directional guidance. We demonstrate improvements to a baseline, monocular-camera-based system by integrating 3D spatial prior information extracted from the pointcloud. Our system's parameters can be set to the actual dimensions of real-world settings, which enables robustness to occlusion and perspective transformation. The system works especially well in non-occlusion situations, and is reasonably accurate under different kinds of conditions. As well, our large dataset of pedestrian crossings, organized by different types and situations of pedestrian crossings in order to reflect real-world environments, is publicly available in a commonly used format (ROS bagfiles) for further research.

  7. Design of and normative data for a new computer based test of ocular torsion.

    Science.gov (United States)

    Vaswani, Reena S; Mudgil, Ananth V

    2004-01-01

    To evaluate a new clinically practical and dynamic test for quantifying torsional binocular eye alignment changes which may occur in the change from monocular to binocular viewing conditions. The test was developed using a computer with Lotus Freelance software, binoculars with prisms, and colored filters. The subject looks through the binoculars at the computer screen two meters away. For monocular vision, six concentric blue circles, a blue horizontal line and a tilted red line were displayed on the screen. For binocular vision, white circles replaced the blue circles. The subject was asked to orient the lines parallel to each other. The difference in tilt (degrees) between the subjective parallel and fixed horizontal position is the torsional alignment of the eye. The time to administer the test was approximately two minutes. In 70 normal subjects, average age 16 years, the mean cyclodeviation tilt in the right eye was 0.6 degrees for monocular viewing conditions and 0.7 degrees for binocular viewing conditions, with a standard deviation of approximately one degree. There was no statistically significant difference between monocular and binocular viewing. This computer-based test is a simple, non-invasive test that has potential for use in the diagnosis of cyclovertical strabismus. Currently, there is no commercially available test for this purpose.

  8. Separating monocular and binocular neural mechanisms mediating chromatic contextual interactions.

    Science.gov (United States)

    D'Antona, Anthony D; Christiansen, Jens H; Shevell, Steven K

    2014-04-17

    When seen in isolation, a light that varies in chromaticity over time is perceived to oscillate in color. Perception of that same time-varying light may be altered by a surrounding light that is also temporally varying in chromaticity. The neural mechanisms that mediate these contextual interactions are the focus of this article. Observers viewed a central test stimulus that varied in chromaticity over time within a larger surround that also varied in chromaticity at the same temporal frequency. Center and surround were presented either to the same eye (monocular condition) or to opposite eyes (dichoptic condition) at the same frequency (3.125, 6.25, or 9.375 Hz). Relative phase between center and surround modulation was varied. In both the monocular and dichoptic conditions, the perceived modulation depth of the central light depended on the relative phase of the surround. A simple model implementing a linear combination of center and surround modulation fit the measurements well. At the lowest temporal frequency (3.125 Hz), the surround's influence was virtually identical for monocular and dichoptic conditions, suggesting that at this frequency, the surround's influence is mediated primarily by a binocular neural mechanism. At higher frequencies, the surround's influence was greater for the monocular condition than for the dichoptic condition, and this difference increased with temporal frequency. Our findings show that two separate neural mechanisms mediate chromatic contextual interactions: one binocular and dominant at lower temporal frequencies and the other monocular and dominant at higher frequencies (6-10 Hz).

  9. Development of an advanced intelligent robot navigation system

    International Nuclear Information System (INIS)

    Hai Quan Dai; Dalton, G.R.; Tulenko, J.; Crane, C.C. III

    1992-01-01

    As part of the US Department of Energy's Robotics for Advanced Reactors Project, the authors are in the process of assembling an advanced intelligent robotic navigation and control system based on previous work performed on this project in the areas of computer control, database access, graphical interfaces, shared data and computations, computer vision for position determination, and sonar-based computer navigation systems. The system will feature three levels of goals: (1) a high-level system for management of lower level functions to achieve specific functional goals; (2) an intermediate level of goals such as position determination, obstacle avoidance, and discovering unexpected objects; and (3) other supplementary low-level functions such as reading and recording sonar or video camera data. In its current phase, the Cybermotion K2A mobile robot is not equipped with an onboard computer system, which will be included in the final phase. By that time, the onboard system will play important roles in vision processing and in robotic control communication

  10. A Depth-Based Head-Mounted Visual Display to Aid Navigation in Partially Sighted Individuals

    Science.gov (United States)

    Hicks, Stephen L.; Wilson, Iain; Muhammed, Louwai; Worsfold, John; Downes, Susan M.; Kennard, Christopher

    2013-01-01

    Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision that the individual is likely to retain. Residual vision can be restricted to the awareness of the orientation of a light source, and hence any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice with no increase in the number of collisions. In a second experiment a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill to use a depth-based display for obstacle avoidance can be rapidly acquired and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals. PMID:23844067
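    The distance-as-brightness encoding at the heart of the device is essentially a one-line transform. A hedged sketch follows; the range limits and the zero-reading convention are illustrative, not the published calibration.

    ```python
    import numpy as np

    def depth_to_brightness(depth_m, near=0.5, far=3.0):
        # Nearest objects are brightest; anything beyond `far` is not shown.
        depth_m = np.asarray(depth_m, dtype=np.float32)
        d = np.clip(depth_m, near, far)
        brightness = (far - d) / (far - near)   # 1.0 at `near`, 0.0 at `far`
        brightness[depth_m <= 0] = 0.0          # no depth reading -> dark pixel
        return (255 * brightness).astype(np.uint8)
    ```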

  12. The prevalence and causes of decreased visual acuity – a study based on vision screening conducted at Enukweni and Mzuzu Foundation Primary Schools, Malawi

    Directory of Open Access Journals (Sweden)

    Thom L

    2016-12-01

    Full Text Available Leaveson Thom,1 Sanchia Jogessar,1,2 Sara L McGowan,1 Fiona Lawless,1,2 1Department of Optometry, Mzuzu University, Mzuzu, Malawi; 2Brien Holden Vision Institute, Durban, South Africa Aim: To determine the prevalence and causes of decreased visual acuity (VA) among pupils recruited in two primary schools in Mzimba district, northern region of Malawi. Materials and methods: The study was based on vision screening conducted by optometrists at Enukweni and Mzuzu Foundation Primary Schools. The measurements during the screening included unaided distance monocular VA using Low Vision Resource Center and Snellen charts, pinhole VA on any subject with VA of less than 6/6, refraction, pupil evaluations, ocular movements, ocular health, and the shadow test. Results: The prevalence of decreased VA was found to be low in the school-going population (4%, n=594). Even though Enukweni Primary School had fewer participants than Mzuzu Foundation Primary School, it had a higher prevalence of decreased VA (5.8%, n=275) than Mzuzu Foundation Primary School (1.8%, n=319). The principal causes of decreased VA in this study were found to be amblyopia and uncorrected refractive errors, with myopia being a more common cause than hyperopia. Conclusion: Based on the low prevalence of decreased VA due to myopia or hyperopia, it should not be concluded that refractive errors are an insignificant contributor to visual disability in Malawi. More vision screenings are required at a large scale on the school-aged population to reflect the real situation on the ground. Cost-effective strategies are needed to address this easily treatable cause of vision impairment. Keywords: vision screening, refractive errors, visual acuity, Enukweni, Mzuzu Foundation

  13. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.

    Science.gov (United States)

    Gakne, Paul Verlaine; O'Keefe, Kyle

    2018-04-17

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle estimated from an upward-facing camera with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera is used to classify the acquired images into sky and non-sky regions (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) is rejected and not considered for the final position solution. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for ego-motion estimation in urban areas in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and thus is able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information systems are tightly coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions in a deep urban canyon, even with fewer than four GNSS satellites, improving accuracy over GNSS-only and loosely-coupled GNSS/vision solutions by 20 percent and 82 percent (in the worst case), respectively.
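    The satellite-rejection step lends itself to a compact sketch. The snippet below is an illustration, not the authors' implementation: it projects each tracked satellite's azimuth and elevation into the upward-facing image under an assumed equidistant fisheye model and keeps only satellites that land on pixels the segmentation labelled as sky. The function names, fisheye model, and calibration values are assumptions.

```python
# Reject satellites whose line of sight falls on non-sky pixels.
import numpy as np

def satellite_pixel(az_deg, el_deg, cx, cy, f_px):
    """Equidistant fisheye: image radius proportional to zenith angle."""
    zenith = np.deg2rad(90.0 - el_deg)
    az = np.deg2rad(az_deg)
    r = f_px * zenith
    return int(round(cx + r * np.sin(az))), int(round(cy - r * np.cos(az)))

def usable_satellites(sats, sky_mask, cx, cy, f_px):
    """sats: iterable of (prn, az_deg, el_deg); sky_mask: 2D array, 1 = sky."""
    keep = []
    for prn, az, el in sats:
        u, v = satellite_pixel(az, el, cx, cy, f_px)
        if 0 <= v < sky_mask.shape[0] and 0 <= u < sky_mask.shape[1] and sky_mask[v, u]:
            keep.append(prn)  # line of sight likely unobstructed
    return keep

# Placeholder example with an all-sky mask: the low-elevation satellite G12
# projects outside the ~90-degree field of view and is dropped.
mask = np.ones((480, 640), dtype=np.uint8)
print(usable_satellites([("G05", 40.0, 60.0), ("G12", 200.0, 15.0)], mask, 320, 240, 300.0))
```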

  14. Stereo-vision-based cooperative-vehicle positioning using OCC and neural networks

    Science.gov (United States)

    Ifthekhar, Md. Shareef; Saha, Nirzhar; Jang, Yeong Min

    2015-10-01

    Vehicle positioning has been subjected to extensive research regarding driving safety measures and assistance as well as autonomous navigation. The most common positioning technique used in automotive positioning is the global positioning system (GPS). However, GPS is not reliably accurate because of signal blockage caused by high-rise buildings. In addition, GPS is error prone when a vehicle is inside a tunnel. Moreover, GPS and other radio-frequency-based approaches cannot provide orientation information or the position of neighboring vehicles. In this study, we propose a cooperative-vehicle positioning (CVP) technique using the newly developed optical camera communications (OCC). The OCC technique utilizes image sensors and cameras to receive and decode light-modulated information from light-emitting diodes (LEDs). A vehicle equipped with an OCC transceiver can receive positioning and other information such as speed, lane changes, driver's condition, etc., through optical wireless links with neighboring vehicles. Thus, the position of a target vehicle that is too far away to establish an OCC link can be determined by a computer-vision-based technique combined with the cooperation of neighboring vehicles. In addition, we have devised a back-propagation (BP) neural-network learning method for positioning and range estimation for CVP. The proposed neural-network-based technique can estimate target vehicle position from only two image points of target vehicles using stereo vision. For this, we use rear LEDs on target vehicles as image points. We show from simulation results that our neural-network-based method achieves better accuracy than that of the computer-vision method.
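    As a toy stand-in for the range-estimation idea (clearly not the paper's BP network), the sketch below trains a small regressor to map the pixel separation of a vehicle's two rear LEDs to range. The training pairs come from the pinhole relation w = f * W / Z with assumed focal length and LED spacing; using scikit-learn's MLPRegressor instead of a hand-rolled back-propagation network is a deliberate substitution.

```python
# Learn pixel-separation -> range from synthetic pinhole data.
import numpy as np
from sklearn.neural_network import MLPRegressor

f_px, led_spacing_m = 800.0, 1.2                      # assumed constants
rng = np.random.default_rng(1)
ranges = rng.uniform(5.0, 60.0, size=2000)            # metres
seps = f_px * led_spacing_m / ranges + rng.normal(0.0, 0.2, ranges.size)

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1)
net.fit(seps.reshape(-1, 1), ranges)
print(net.predict([[48.0]]))                          # roughly 20 m for 48 px
```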

  15. An Integrated Vision-Based System for Spacecraft Attitude and Topology Determination for Formation Flight Missions

    Science.gov (United States)

    Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray

    2004-01-01

    With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.

  16. Monocular depth effects on perceptual fading.

    Science.gov (United States)

    Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling

    2010-08-06

    After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved.

  17. The orientation of homing pigeons (Columba livia f.d.) with and without navigational experience in a two-dimensional environment.

    Directory of Open Access Journals (Sweden)

    Julia Mehlhorn

    Full Text Available Homing pigeons are known for their excellent homing ability, and their brains seem to be functionally adapted to homing. It is known that pigeons with navigational experience show a larger hippocampus and also a more lateralised brain than pigeons without navigational experience. So we hypothesized that experience may have an influence also on orientation ability. We examined two groups of pigeons (11 with navigational experience and 17 without) in a standard operant chamber with a touch screen monitor showing a 2-D schematic of a rectangular environment (as "geometric" information) and one uniquely shaped and colored feature in each corner (as "landmark" information). Pigeons were trained first for pecking on one of these features and then we examined their ability to encode geometric and landmark information in four tests by modifying the rectangular environment. All tests were done under binocular and monocular viewing to test hemispheric dominance. The number of pecks was counted for analysis. Results show that generally both groups orientate on the basis of landmarks and the geometry of environment, but landmark information was preferred. Pigeons with navigational experience did not perform better on the tests but showed a better conjunction of the different kinds of information. Significant differences between monocular and binocular viewing were detected particularly in pigeons without navigational experience on two tests with reduced information. Our data suggest that the conjunction of geometric and landmark information might be integrated after processing separately in each hemisphere and that this process is influenced by experience.

  18. The orientation of homing pigeons (Columba livia f.d.) with and without navigational experience in a two-dimensional environment.

    Science.gov (United States)

    Mehlhorn, Julia; Rehkaemper, Gerd

    2017-01-01

    Homing pigeons are known for their excellent homing ability, and their brains seem to be functionally adapted to homing. It is known that pigeons with navigational experience show a larger hippocampus and also a more lateralised brain than pigeons without navigational experience. So we hypothesized that experience may have an influence also on orientation ability. We examined two groups of pigeons (11 with navigational experience and 17 without) in a standard operant chamber with a touch screen monitor showing a 2-D schematic of a rectangular environment (as "geometric" information) and one uniquely shaped and colored feature in each corner (as "landmark" information). Pigeons were trained first for pecking on one of these features and then we examined their ability to encode geometric and landmark information in four tests by modifying the rectangular environment. All tests were done under binocular and monocular viewing to test hemispheric dominance. The number of pecks was counted for analysis. Results show that generally both groups orientate on the basis of landmarks and the geometry of environment, but landmark information was preferred. Pigeons with navigational experience did not perform better on the tests but showed a better conjunction of the different kinds of information. Significant differences between monocular and binocular viewing were detected particularly in pigeons without navigational experience on two tests with reduced information. Our data suggest that the conjunction of geometric and landmark information might be integrated after processing separately in each hemisphere and that this process is influenced by experience.

  19. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities

    OpenAIRE

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    Purpose: To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Methods: Thirty rabbits, comprised of 18 subject and 12 control animals, were included and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas was harvested and processed for light microscopy. Photomicrographs of ...

  20. Adaptive Landmark-Based Navigation System Using Learning Techniques

    DEFF Research Database (Denmark)

    Zeidan, Bassel; Dasgupta, Sakyasingha; Wörgötter, Florentin

    2014-01-01

    The goal-directed navigational ability of animals is an essential prerequisite for them to survive. They can learn to navigate to a distal goal in a complex environment. During this long-distance navigation, they exploit environmental features, like landmarks, to guide them towards their goal. Inspired by this, we develop an adaptive landmark-based navigation system based on sequential reinforcement learning. In addition, correlation-based learning is also integrated into the system to improve learning performance. The proposed system has been applied to simulated simple wheeled and more complex hexapod robots. As a result, it allows the robots to successfully learn to navigate to distal goals in complex environments.

  1. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, under the assumption that the wall is normal to the ground and vertically flat. This assumption can be relaxed, however, because the subsequent feature matching process rejects outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach.
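    The hybrid 3D-coordinate step can be sketched as a ray-plane intersection (a generic reconstruction under the stated wall assumption, not the paper's code): the 2D laser scan supplies the wall plane, and each image feature is back-projected onto it.

```python
# Back-project a pixel onto a vertical wall plane n.X = d obtained from the
# laser scan; the intrinsics and plane values below are illustrative.
import numpy as np

def feature_on_wall(uv, K, n_wall, d_wall):
    """uv: pixel; K: 3x3 intrinsics; n_wall, d_wall: wall plane, camera frame."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])  # viewing ray
    s = d_wall / float(n_wall @ ray)                        # ray-plane scale
    return s * ray                                          # 3D point, camera frame

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
n_wall, d_wall = np.array([0.0, 0.0, 1.0]), 4.0             # wall 4 m ahead
print(feature_on_wall((400.0, 200.0), K, n_wall, d_wall))   # [0.64 -0.32 4.0]
```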

  2. VISION BASED OBSTACLE DETECTION IN UAV IMAGING

    Directory of Open Access Journals (Sweden)

    S. Badrloo

    2017-08-01

    Full Text Available Detecting and avoiding obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation, and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which exploit the apparent enlargement of an obstacle as it is approached. Recent research in this field has concentrated on matching SIFT points, along with the SIFT size-ratio factor and the area ratio of convex hulls in two consecutive frames, to detect obstacles. This method is not able to distinguish between near and far obstacles or obstacles in complex environments, and is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points. Then, every point is examined to distinguish between far and close obstacles. The results demonstrated the high efficiency of the proposed method in complex environments.
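    A minimal sketch of the dist-ratio cue is given below, assuming OpenCV with SIFT available and two consecutive grayscale frames; the file names and the 1.05 expansion threshold are placeholders. For matched keypoint pairs, the ratio of their mutual pixel distance in the newer frame to that in the older frame rises above 1 as an obstacle looms.

```python
# Illustrative dist-ratio computation between two consecutive frames.
import itertools
import cv2
import numpy as np

frame1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
frame2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(frame1, None)
kp2, des2 = sift.detectAndCompute(frame2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]   # strongest matches

ratios = []
for ma, mb in itertools.combinations(matches, 2):
    p1a, p1b = np.array(kp1[ma.queryIdx].pt), np.array(kp1[mb.queryIdx].pt)
    p2a, p2b = np.array(kp2[ma.trainIdx].pt), np.array(kp2[mb.trainIdx].pt)
    d_old, d_new = np.linalg.norm(p1a - p1b), np.linalg.norm(p2a - p2b)
    if d_old > 1.0:                                        # skip near-coincident pairs
        ratios.append(d_new / d_old)

# A median dist-ratio noticeably above 1 signals an expanding, approaching obstacle.
approaching = bool(ratios) and float(np.median(ratios)) > 1.05
print(approaching)
```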

  3. A Dataset for Visual Navigation with Neuromorphic Methods

    Directory of Open Access Journals (Sweden)

    Francisco Barranco

    2016-02-01

    Full Text Available Standardized benchmarks in Computer Vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to Computer Vision conventional approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes, with the synthetic data created with graphics packages, and the real data recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data includes the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.

  4. Action Control: Independent Effects of Memory and Monocular Viewing on Reaching Accuracy

    Science.gov (United States)

    Westwood, D.A.; Robertson, C.; Heath, M.

    2005-01-01

    Evidence suggests that perceptual networks in the ventral visual pathway are necessary for action control when targets are viewed with only one eye, or when the target must be stored in memory. We tested whether memory-linked (i.e., open-loop versus memory-guided actions) and monocular-linked effects (i.e., binocular versus monocular actions) on…

  5. Prism therapy and visual rehabilitation in homonymous visual field loss.

    LENUS (Irish Health Repository)

    O'Neill, Evelyn C

    2012-02-01

    PURPOSE: Homonymous visual field defects (HVFD) are common and frequently occur after cerebrovascular accidents. They significantly impair visual function and cause disability particularly with regard to visual exploration. The purpose of this study was to assess a novel interventional treatment of monocular prism therapy on visual functioning in patients with HVFD of varied etiology using vision targeted, health-related quality of life (QOL) questionnaires. Our secondary aim was to confirm monocular and binocular visual field expansion pre- and posttreatment. METHODS: Twelve patients with acquired, documented HVFD were eligible to be included. All patients underwent specific vision-targeted, health-related QOL questionnaire and monocular and binocular Goldmann perimetry before commencing prism therapy. Patients were fitted with monocular prisms on the side of the HVFD with the base-in the direction of the field defect creating a peripheral optical exotropia and field expansion. After the treatment period, QOL questionnaires and perimetry were repeated. RESULTS: Twelve patients were included in the treatment group, 10 of whom were included in data analysis. Overall, there was significant improvement within multiple vision-related, QOL functioning parameters, specifically within the domains of general health (p < 0.01), general vision (p < 0.05), distance vision (p < 0.01), peripheral vision (p < 0.05), role difficulties (p < 0.05), dependency (p < 0.05), and social functioning (p < 0.05). Visual field expansion was shown when measured monocularly and binocularly during the study period in comparison with pretreatment baselines. CONCLUSIONS: Patients with HVFD demonstrate decreased QOL. Monocular sector prisms can improve the QOL and expand the visual field in these patients.

  6. Low Cost Vision Based Personal Mobile Mapping System

    Science.gov (United States)

    Amami, M. M.; Smith, M. J.; Kokkas, N.

    2014-03-01

    Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collecting. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low cost vision based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  7. Low Cost Vision Based Personal Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    M. M. Amami

    2014-03-01

    Full Text Available Mobile mapping systems (MMS) can be used for several purposes, such as transportation, highway infrastructure mapping and GIS data collecting. However, the acceptance of these systems is not widespread and their use is still limited due to the high cost and dependency on the Global Navigation Satellite System (GNSS). A low cost vision based personal MMS has been produced with the aim of overcoming these limitations. The system has been designed to depend mainly on cameras, using low cost GNSS and inertial sensors to provide a bundle adjustment solution with initial values. The system has the potential to be used indoors and outdoors. The system has been tested indoors and outdoors with different GPS coverage, surrounding features, and narrow and curvy paths. Tests show that the system is able to work in such environments, providing 3D coordinates of better than 10 cm accuracy.

  8. The effects of left and right monocular viewing on hemispheric activation.

    Science.gov (United States)

    Wang, Chao; Burtis, D Brandon; Ding, Mingzhou; Mo, Jue; Williamson, John B; Heilman, Kenneth M

    2018-03-01

    Prior research has revealed that whereas activation of the left hemisphere primarily increases the activity of the parasympathetic division of the autonomic nervous system, right-hemisphere activation increases the activity of the sympathetic division. In addition, each hemisphere primarily receives retinocollicular projections from the contralateral eye. A prior study reported that pupillary dilation was greater with left- than with right-eye monocular viewing. The goal of this study was to test the alternative hypotheses that this asymmetric pupil dilation with left-eye viewing was induced by activation of the right-hemispheric-mediated sympathetic activity, versus a reduction of left-hemisphere-mediated parasympathetic activity. Thus, this study was designed to learn whether there are changes in hemispheric activation, as measured by alteration of spontaneous alpha activity, during right versus left monocular viewing. High-density electroencephalography (EEG) was recorded from healthy participants viewing a crosshair with their right, left, or both eyes. There was significantly less alpha power over the right hemisphere's parietal-occipital area with left-eye and binocular viewing than with right-eye monocular viewing. The greater relative reduction of right-hemisphere alpha activity during left than during right monocular viewing provides further evidence that left-eye viewing induces a greater increase in right-hemisphere activation than does right-eye viewing.

  9. Visual servo control for a human-following robot

    CSIR Research Space (South Africa)

    Burke, Michael G

    2011-03-01

    Full Text Available This thesis presents work completed on the design of control and vision components for use in a monocular vision-based human-following robot. The use of vision in a controller feedback loop is referred to as vision-based or visual servo control...

  10. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, J.J.; Albertazzi, L.; Doorn, A.J. van; Ee, R. van; Grind, W.A. van de; Kappers, A.M.L.; Lappin, J.S.; Norman, J.F.; Oomes, A.H.J.; Pas, S.F. te; Phillips, F.; Pont, S.C.; Richards, W.A.; Todd, J.T.; Verstraten, F.A.J.; Vries, S.C. de

    2010-01-01

    The issue of the existence of planes—understood as the carriers of a nexus of straight lines—in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  11. Does monocular visual space contain planes?

    NARCIS (Netherlands)

    Koenderink, Jan J.; Albertazzi, Liliana; van Doorn, Andrea J.; van Ee, Raymond; van de Grind, Wim A.; Kappers, Astrid M L; Lappin, Joe S.; Farley Norman, J.; (Stijn) Oomes, A. H J; te Pas, Susan P.; Phillips, Flip; Pont, Sylvia C.; Richards, Whitman A.; Todd, James T.; Verstraten, Frans A J; de Vries, Sjoerd

    The issue of the existence of planes-understood as the carriers of a nexus of straight lines-in the monocular visual space of a stationary human observer has never been addressed. The most recent empirical data apply to binocular visual space and date from the 1960s (Foley, 1964). This appears to be

  12. Linear study and bundle adjustment data fusion; Application to vision localization

    International Nuclear Information System (INIS)

    Michot, J.

    2010-01-01

    The works presented in this manuscript are in the field of computer vision and tackle the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can easily be integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), and its two-dimensional variant (Two way-ALS), accelerate the convergence of the bundle adjustment algorithm. Approximating the re-projection error by an algebraic distance enables the analytical calculation of an effective displacement amplitude (or two amplitudes for the Two way-ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two way-ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position, and scale. Since these algorithms are incremental, errors and approximations accumulate throughout the trajectory and cause global drifts. In addition, a vision-based tracking system can be dazzled, or used under conditions that temporarily prevent it from computing the system's location. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used will vary depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
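    The core trick, computing a step amplitude analytically rather than iteratively, can be illustrated generically. The sketch below is not the thesis's G-ALS formulation; it merely shows a non-iterative line search that fits a low-degree polynomial to the error along the search direction and returns the best real critical point. The polynomial degree, sampling range, and toy least-squares problem are assumptions.

```python
# Non-iterative line search via polynomial fit and root finding.
import numpy as np

def algebraic_line_search(err, x, d, degree=4, t_max=2.0):
    """err: scalar error function; x: current parameters; d: search direction."""
    ts = np.linspace(0.0, t_max, degree + 3)
    poly = np.polynomial.Polynomial.fit(ts, [err(x + t * d) for t in ts], degree)
    crit = [r.real for r in poly.deriv().roots()
            if abs(r.imag) < 1e-9 and 0.0 <= r.real <= t_max]
    return min(crit + [0.0, t_max], key=lambda t: err(x + t * d))

rng = np.random.default_rng(0)
A, b = rng.normal(size=(10, 3)), rng.normal(size=10)
err = lambda p: float(np.sum((A @ p - b) ** 2))
x0, d = np.zeros(3), A.T @ b                      # descent direction at x0
t_star = algebraic_line_search(err, x0, d)
print(t_star, err(x0 + t_star * d) <= err(x0))    # step never increases the error
```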

  13. SLS Model Based Design: A Navigation Perspective

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin

    2018-01-01

    The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.

  14. Biologically based machine vision: signal analysis of monopolar cells in the visual system of Musca domestica.

    Science.gov (United States)

    Newton, Jenny; Barrett, Steven F; Wilcox, Michael J; Popp, Stephanie

    2002-01-01

    Machine vision for navigational purposes is a rapidly growing field. Many abilities such as object recognition and target tracking rely on vision. Autonomous vehicles must be able to navigate in dynamic enviroments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how neurons encode and process information could lead us to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also enhance edge detection. A mesh of interconnected L4's would correlate the output from L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical response of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry to detect and segment images.

  15. A Vision/Inertia Integrated Positioning Method Using Position and Orientation Matching

    Directory of Open Access Journals (Sweden)

    Xiaoyue Zhang

    2017-01-01

    Full Text Available A vision/inertia integrated positioning method using position and orientation matching, which can be adopted on intelligent vehicles such as automated guided vehicles (AGVs) and mobile robots, is proposed in this work. The method is introduced first. Landmarks are placed in the navigation field, and a camera and an inertial measurement unit (IMU) are installed on the vehicle. A vision processor calculates the azimuth and position information from images that contain artificial landmarks of known direction and position. The inertial navigation system (INS) calculates the azimuth and position of the vehicle in real time, and the calculated pixel position of a landmark can be computed from the INS output position. The needed mathematical models are then established, and integrated navigation is implemented by a Kalman filter with observations of the azimuth and the calculated pixel position of the landmark. Navigation errors and IMU errors are estimated and compensated in real time so that high-precision navigation results can be obtained. Finally, simulation and tests are performed. Both simulation and test results prove that this vision/inertia integrated positioning method using position and orientation matching is feasible and can achieve centimeter-level autonomous continuous navigation.
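    The "calculated pixel position of landmark" observation can be sketched under a standard pinhole model (an assumption; the paper's camera model and variable names are not specified here). The INS pose predicts where the landmark should appear; the difference from the pixel actually detected by the vision processor is the Kalman filter innovation.

```python
# Pinhole projection of a known landmark using the INS pose; all numeric
# values are hypothetical placeholders.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],       # intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R_wc = np.eye(3)                          # camera orientation from INS
t_wc = np.zeros(3)                        # camera position from INS
landmark_w = np.array([0.5, 0.2, 3.0])    # surveyed landmark, world frame
detected_uv = np.array([410.0, 278.0])    # pixel found by the vision processor

def project_landmark(landmark_w, R_wc, t_wc, K):
    p_c = R_wc.T @ (landmark_w - t_wc)    # world -> camera frame
    uvw = K @ p_c
    return uvw[:2] / uvw[2]

innovation = detected_uv - project_landmark(landmark_w, R_wc, t_wc, K)
print(innovation)   # fed to the Kalman filter as the measurement residual
```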

  16. Manifolds for pose tracking from monocular video

    Science.gov (United States)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2015-03-01

    We formulate a simple human-pose tracking theory from monocular video based on the fundamental relationship between changes in pose and image motion vectors. We investigate the natural embedding of the low-dimensional body pose space into a high-dimensional space of body configurations that behaves locally in a linear manner. The embedded manifold facilitates the decomposition of the image motion vectors into basis motion vector fields of the tangent space to the manifold. This approach benefits from the style invariance of image motion flow vectors, and experiments to validate the fundamental theory show reasonable accuracy (within 4.9 deg of the ground truth).

  17. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Jamal Atman

    2016-09-01

    Full Text Available Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. However, first, the knowledge about the pose between both sensors is obtained by proposing an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.

  18. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    Science.gov (United States)

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated in the MAV's navigation system. However, first, the knowledge about the pose between both sensors is obtained by proposing an improved calibration method. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.
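    For readers unfamiliar with the P3P step, the snippet below shows how a pose-from-correspondences problem of this kind can be solved with OpenCV's SOLVEPNP_P3P flag, which expects exactly four 3D-2D correspondences. This is a generic illustration, not the authors' solver; all coordinates and intrinsics are placeholders.

```python
# Camera pose from four 3D-2D correspondences via OpenCV's P3P solver.
import numpy as np
import cv2

object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]], dtype=np.float64)
image_pts = np.array([[320.0, 240.0], [400.0, 238.0],
                      [322.0, 160.0], [402.0, 158.0]], dtype=np.float64)
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None, flags=cv2.SOLVEPNP_P3P)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix; tvec is the translation
    print(R, tvec)
```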

  19. A fuzzy logic based navigation for mobile robot

    International Nuclear Information System (INIS)

    Adel Ali S Al-Jumaily; Shamsudin M Amin; Mohamed Khalil

    1998-01-01

    The main issue for an intelligent robot is how to reach its goal safely in real time when it moves in an unknown environment. Navigational planning is becoming the central issue in the development of real-time autonomous mobile robots. Behaviour-based robots have been successful in reacting to dynamic environments, but some complex and challenging problems remain. Fuzzy-based behaviours present a powerful method for solving real-time reactive navigation problems in unknown environments. We classify the navigation generation methods, give some characteristics of these methods, explain why fuzzy logic is suitable for the navigation of mobile robots and automated guided vehicles, and describe a reactive navigation approach that is flexible enough to react, through its behaviours, to changes in the environment. Some simulation results are presented to show the navigation of the robot. (Author)
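    A toy flavour of a fuzzy reactive behaviour is sketched below; the membership shapes, rule consequents, and distances are invented for illustration and are not from the paper. A sonar distance is fuzzified with triangular memberships, each rule votes for a turn magnitude, and a weighted average (centroid-style defuzzification) yields the command.

```python
# Toy fuzzy obstacle-avoidance rule base.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_turn(dist_m):
    near = tri(dist_m, -1.0, 0.0, 1.0)   # obstacle very close -> turn hard
    mid = tri(dist_m, 0.5, 1.5, 2.5)     # moderate distance -> turn gently
    far = tri(dist_m, 2.0, 3.5, 5.0)     # path clear -> go straight
    turns = np.array([45.0, 15.0, 0.0])  # rule consequents, degrees
    weights = np.array([near, mid, far])
    if weights.sum() == 0.0:             # outside all memberships: no turn
        return 0.0
    return float(weights @ turns / weights.sum())

print(fuzzy_turn(0.8))   # -> 27.0 degrees: fairly sharp avoidance turn
```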

  20. SLS Navigation Model-Based Design Approach

    Science.gov (United States)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and

  1. Three-dimensional vision enhances task performance independently of the surgical method.

    Science.gov (United States)

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach. Performance was better with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  2. Landing performance by low-time private pilots after the sudden loss of binocular vision - Cyclops II

    Science.gov (United States)

    Lewis, C. E., Jr.; Swaroop, R.; Mcmurty, T. C.; Blakeley, W. R.; Masters, R. L.

    1973-01-01

    Study of low-time general aviation pilots who, in a series of spot landings, were suddenly deprived of binocular vision by patching either eye on the downwind leg of a standard, closed traffic pattern. Data collected during these landings were compared with control data from landings flown with normal vision during the same flight. The sequence of patching and the mix of control and monocular landings were randomized to minimize the effect of learning. No decrease in performance was observed during landings with vision restricted to one eye; in fact, performance improved. This observation is reported at a high level of confidence (p less than 0.001). These findings confirm the previous work of Lewis and Krier and have important implications with regard to aeromedical certification standards.

  3. Effect of Monocular Deprivation on Rabbit Neural Retinal Cell Densities.

    Science.gov (United States)

    Mwachaka, Philip Maseghe; Saidi, Hassan; Odula, Paul Ochieng; Mandela, Pamela Idenya

    2015-01-01

    To describe the effect of monocular deprivation on densities of neural retinal cells in rabbits. Thirty rabbits, comprising 18 subject and 12 control animals, were included, and monocular deprivation was achieved through unilateral lid suturing in all subject animals. The rabbits were observed for three weeks. At the end of each week, 6 experimental and 3 control animals were euthanized, their retinas were harvested and processed for light microscopy. Photomicrographs of the retina were taken and imported into FIJI software for analysis. Neural retinal cell densities of deprived eyes were reduced with increasing duration of deprivation. The percentage reductions were 60.9% (P < 0.001), 41.6% (P = 0.003), and 18.9% (P = 0.326) for ganglion, inner nuclear, and outer nuclear cells, respectively. In contrast, cell densities in non-deprived eyes increased by 116% (P < 0.001), 52% (P < 0.001) and 59.6% (P < 0.001) in ganglion, inner nuclear, and outer nuclear cells, respectively. In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye, along with reduced cell densities in the deprived eye.

  4. Intelligent personal navigator supported by knowledge-based systems for estimating dead reckoning navigation parameters

    Science.gov (United States)

    Moafipoor, Shahram

    Personal navigators (PN) have been studied for about a decade in different fields and applications, such as safety and rescue operations, security and emergency services, and police and military applications. The common goal of all these applications is to provide precise and reliable position, velocity, and heading information of each individual in various environments. In the PN system developed in this dissertation, the underlying assumption is that the system does not require pre-existing infrastructure to enable pedestrian navigation. To facilitate this capability, a multisensor system concept, based on the Global Positioning System (GPS), inertial navigation, barometer, magnetometer, and a human pedometry model has been developed. An important aspect of this design is to use the human body as navigation sensor to facilitate Dead Reckoning (DR) navigation in GPS-challenged environments. The system is designed predominantly for outdoor environments, where occasional loss of GPS lock may happen; however, testing and performance demonstration have been extended to indoor environments. DR navigation is based on a relative-measurement approach, with the key idea of integrating the incremental motion information in the form of step direction (SD) and step length (SL) over time. The foundation of the intelligent navigation system concept proposed here rests in exploiting the human locomotion pattern, as well as change of locomotion in varying environments. In this context, the term intelligent navigation represents the transition from the conventional point-to-point DR to dynamic navigation using the knowledge about the mechanism of the moving person. This approach increasingly relies on integrating knowledge-based systems (KBS) and artificial intelligence (AI) methodologies, including artificial neural networks (ANN) and fuzzy logic (FL). In addition, a general framework of the quality control for the real-time validation of the DR processing is proposed, based on a
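    The DR core the abstract describes reduces to a very small update. As a minimal sketch (names and step values are illustrative, not from the dissertation), each detected step advances the position by one step length SL along the step direction SD:

```python
# Dead-reckoning step integration.
import math

def dr_step(east, north, sl_m, sd_deg):
    """Propagate 2D position by one step of length sl_m along heading sd_deg
    (degrees clockwise from north)."""
    sd = math.radians(sd_deg)
    return east + sl_m * math.sin(sd), north + sl_m * math.cos(sd)

pos = (0.0, 0.0)
for sl, sd in [(0.74, 90.0), (0.71, 88.0), (0.76, 92.0)]:   # hypothetical steps
    pos = dr_step(*pos, sl, sd)
print(pos)   # accumulated east/north displacement
```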

  5. The minimum test battery to screen for binocular vision anomalies: report 3 of the BAND study.

    Science.gov (United States)

    Hussaindeen, Jameel Rizwana; Rakshit, Archayeeta; Singh, Neeraj Kumar; Swaminathan, Meenakshi; George, Ronnie; Kapur, Suman; Scheiman, Mitchell; Ramani, Krishna Kumar

    2018-03-01

    This study aims to report the minimum test battery needed to screen non-strabismic binocular vision anomalies (NSBVAs) in a community set-up. When large numbers are to be screened, we aim to identify the most useful test battery when there is no opportunity for a more comprehensive and time-consuming clinical examination. The prevalence estimates and normative data for binocular vision parameters were estimated from the Binocular Vision Anomalies and Normative Data (BAND) study, following which cut-off estimates and receiver operating characteristic curves to identify the minimum test battery were plotted. In the receiver operating characteristic phase of the study, children between nine and 17 years of age were screened in two schools in the rural arm using the minimum test battery, and the prevalence estimates with the minimum test battery were found. Receiver operating characteristic analyses revealed that near point of convergence with penlight and red filter (> 7.5 cm), monocular accommodative facility (< 7 cycles per minute), and the difference between distance and near phoria (> 1.25 prism dioptres) were significant factors with cut-off values for best sensitivity and specificity. This minimum test battery was applied to a cohort of 305 children. The mean (standard deviation) age of the subjects was 12.7 (two) years, with 121 males and 184 females. Using the minimum battery of tests obtained through the receiver operating characteristic analyses, the prevalence of NSBVAs was found to be 26 per cent. Near point of convergence with penlight and red filter > 10 cm was found to have the highest sensitivity (80 per cent) and specificity (73 per cent) for the diagnosis of convergence insufficiency. For the diagnosis of accommodative infacility, monocular accommodative facility with a cut-off of less than seven cycles per minute was the best predictor for screening (92 per cent sensitivity and 90 per cent specificity). The minimum test battery of near point of convergence with penlight and red filter, difference between distance and near

  6. Role of high-order aberrations in senescent changes in spatial vision

    Energy Technology Data Exchange (ETDEWEB)

    Elliot, S; Choi, S S; Doble, N; Hardy, J L; Evans, J W; Werner, J S

    2009-01-06

    The contributions of optical and neural factors to age-related losses in spatial vision are not fully understood. We used closed-loop adaptive optics to test the visual benefit of correcting monochromatic high-order aberrations (HOAs) on spatial vision for observers ranging in age from 18-81 years. Contrast sensitivity was measured monocularly using a two-alternative forced choice (2AFC) procedure for sinusoidal gratings over 6 mm and 3 mm pupil diameters. Visual acuity was measured using a spatial 4AFC procedure. Over a 6 mm pupil, young observers showed a large benefit of AO at high spatial frequencies, whereas older observers exhibited the greatest benefit at middle spatial frequencies, plus a significantly larger increase in visual acuity. When age-related miosis is controlled, young and old observers exhibited a similar benefit of AO for spatial vision. An increase in HOAs cannot account for the complete senescent decline in spatial vision. These results may indicate a larger role of additional optical factors when the impact of HOAs is removed, but also lend support for the importance of neural factors in age-related changes in spatial vision.

  7. Indoor radar SLAM A radar application for vision and GPS denied environments

    NARCIS (Netherlands)

    Marck, J.W.; Mohamoud, A.A.; Houwen, E.H. van de; Heijster, R.M.E.M. van

    2013-01-01

    Indoor navigation, especially in unknown areas, is a real challenge. Simultaneous Localization and Mapping (SLAM) technology provides a solution. However, SLAM as currently based on optical sensors is unsuitable in vision-denied areas, which are for example encountered by first responders. Radar can

  8. Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter.

    Science.gov (United States)

    Alatise, Mary B; Hancke, Gerhard P

    2017-09-21

    Using a single sensor to determine the pose estimation of a device cannot give accurate results. This paper presents a fusion of an inertial sensor of six degrees of freedom (6-DoF), which comprises a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular vision-based object detection algorithm combining speeded-up robust features (SURF) and random sample consensus (RANSAC) was used to recognize a sample object in several images taken. As against conventional methods that depend on point tracking, RANSAC uses an iterative method to estimate the parameters of a mathematical model from a set of captured data which contains outliers. With SURF and RANSAC, improved accuracy is certain; this is because of their ability to find interest points (features) under different viewing conditions using a Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and a camera were fused to estimate the position and orientation of the mobile robot. All these sensors were mounted on the mobile robot to obtain an accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is fast in computation, reliable and robust, and can be considered for practical applications. The performance of the experiments was verified by the ground truth data and root mean square errors (RMSEs).
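    The fusion step follows the textbook EKF measurement update, sketched generically below. This is not the authors' implementation; the symbols are the standard ones rather than names from the paper.

```python
# Generic EKF measurement update: IMU-propagated state corrected by a
# camera-derived measurement.
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """x, P: predicted state and covariance (from IMU integration);
    z: vision measurement; h: observation function; H: its Jacobian at x;
    R: measurement noise covariance."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```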

  9. Teleost polarization vision: how it might work and what it might be good for.

    Science.gov (United States)

    Kamermans, Maarten; Hawryshyn, Craig

    2011-03-12

    In this review, we will discuss the recent literature on fish polarization vision and we will present a model on how the retina processes polarization signals. The model is based on a general retinal-processing scheme and will be compared with the available electrophysiological data on polarization processing in the retina. The results of this model will help illustrate the functional significance of polarization vision for both feeding behaviour and navigation. First, we examine the linkage between structure and function in polarization vision in general.

  10. Shape Perception and Navigation in Blind Adults

    Science.gov (United States)

    Gori, Monica; Cappagli, Giulia; Baud-Bovy, Gabriel; Finocchietti, Sara

    2017-01-01

    Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the representation of space development. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. In blind individuals, compensatory mechanisms can be adopted to improve spatial and navigation skills. On the other hand, the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty to recognize complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development. PMID:28144226

  11. Prevalence and causes of low vision and blindness in a rural Southwest Island of Japan: the Kumejima study.

    Science.gov (United States)

    Nakamura, Yuko; Tomidokoro, Atsuo; Sawaguchi, Shoichi; Sakai, Hiroshi; Iwase, Aiko; Araie, Makoto

    2010-12-01

    To determine the prevalence and causes of low vision and blindness in an adult population on a rural southwest island of Japan. Population-based, cross-sectional study. All residents of Kumejima Island, Japan, 40 years of age and older. Of the 4632 residents 40 years of age and older, 3762 (response rate, 81.2%) underwent a detailed ocular examination including measurement of the best-corrected visual acuity (BCVA) with a Landolt ring chart at 5 m. The age- and gender-specific prevalence rates of low vision and blindness were estimated and causes were identified. Low vision and blindness were defined, according to the definition of the World Health Organization, as a BCVA in the better eye below 20/60 to a lower limit of 20/400 and worse than 20/400, respectively. The prevalence of bilateral low vision was 0.58% (95% confidence interval [CI], 0.38-0.89). The primary causes of low vision were cataract (0.11%), corneal opacity (0.08%), retinitis pigmentosa (RP; 0.06%), and diabetic retinopathy (0.06%). The prevalence of bilateral blindness was 0.39% (95% CI, 0.23-0.65). The primary causes of blindness were RP (0.17%) and glaucoma (0.11%). The primary causes of monocular low vision were cataract (0.65%), corneal opacity (0.16%), age-related macular degeneration (0.16%), and diabetic retinopathy (0.11%), whereas those of monocular blindness were cataract (0.29%), trauma (0.25%), and glaucoma (0.22%). Logistic analysis showed that female gender (P = 0.001; odds ratio [OR], 7.37; 95% CI, 2.20-24.71) and lower body weight (P = 0.015; OR, 0.94; 95% CI, 0.90-0.99) were associated significantly with visual impairment. The prevalences of low vision and blindness in the adult residents of an island in southwest Japan were 1.5 to 3 times higher than the prevalences reported in an urban city on the Japanese mainland. The prevalence of visual impairment caused by RP on this island was much higher than on the mainland, suggesting a genetic characteristic of the population

  12. ANALYSIS OF FREE ROUTE AIRSPACE AND PERFORMANCE BASED NAVIGATION IMPLEMENTATION IN THE EUROPEAN AIR NAVIGATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Svetlana Pavlova

    2014-12-01

    Full Text Available The European Air Traffic Management system requires continuous improvement as air traffic is increasing day by day. For this purpose, international organizations have developed the Free Route Airspace and Performance Based Navigation concepts, which offer the required level of safety, capacity and environmental performance along with cost-effectiveness. The aim of the article is to provide a detailed analysis of the implementation status of Free Route Airspace and Performance Based Navigation within the European region, including the Ukrainian air navigation system.

  13. Deep Hierarchies in the Primate Visual Cortex: What Can We Learn for Computer Vision?

    OpenAIRE

    Kruger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodriguez-Sanchez, Antonio J.; Wiskott, Laurenz

    2013-01-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This article reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer ...

  14. Tundish Cover Flux Thickness Measurement Method and Instrumentation Based on Computer Vision in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Meng Lu

    2013-01-01

    Full Text Available The thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. Traditional methods of measuring TCF thickness are the single- and double-wire methods, which suffer from several problems such as risks to personal safety, sensitivity to the operator, and poor repeatability. To solve these problems, we designed and built an instrument and present a novel method to measure TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including image denoising, monocular range measurement, the scale-invariant feature transform (SIFT), and image gray-gradient detection. Using the present instrument and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrument and method worked well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of traditional measurement methods, or even replace the traditional ones.
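
    As a rough illustration of the first stages of such a pipeline (not the authors' implementation; the file name and parameters are placeholders), denoising, SIFT extraction and gray-gradient detection can be sketched with OpenCV:

    ```python
    import cv2

    # Load a tundish image (path is a placeholder).
    img = cv2.imread("tundish_frame.png", cv2.IMREAD_GRAYSCALE)

    # Denoise before feature extraction.
    den = cv2.fastNlMeansDenoising(img, h=10)

    # SIFT keypoints/descriptors, e.g. for matching the measurement bar.
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(den, None)

    # Horizontal gray-gradient, e.g. to locate the flux boundary.
    grad = cv2.Sobel(den, cv2.CV_64F, 1, 0, ksize=3)
    print(len(kps), "keypoints")
    ```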

  15. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia.

    Science.gov (United States)

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-02-01

    Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. To investigate this question, anisometropic amblyopes (n = 13) completed two psychophysical supra-threshold binocular summation tasks, (1) binocular phase combination and (2) dichoptic global motion coherence, before and after monocular training. We showed that these participants benefited from monocular training in terms of binocular combination. More importantly, the improvements observed in the area under the log CSF (AULCSF) were found to be correlated with the improvements in binocular phase combination.

  16. Vision-Based Georeferencing of GPR in Urban Areas

    Directory of Open Access Journals (Sweden)

    Riccardo Barzaghi

    2016-01-01

    Full Text Available Ground Penetrating Radar (GPR) surveying is widely used to gather accurate knowledge about the geometry and position of underground utilities. The sensor arrays need to be coupled to an accurate positioning system, like a geodetic-grade Global Navigation Satellite System (GNSS) device. However, in urban areas this approach is not always feasible because GNSS accuracy can be substantially degraded due to the presence of buildings, trees, tunnels, etc. In this work, a photogrammetric (vision-based) method for GPR georeferencing is presented. The method can be summarized in three main steps: tie point extraction from the images acquired during the survey, computation of approximate camera extrinsic parameters, and finally a refinement of the parameter estimation using a rigorous implementation of the collinearity equations. A test under operational conditions is described, where an accuracy of a few centimeters was achieved. The results demonstrate that the solution is robust enough to recover vehicle trajectories even in critical situations, such as poorly textured framed surfaces, short baselines, and low intersection angles.
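
    The tie-point step can be illustrated with a minimal OpenCV sketch; ORB matching is used here as a stand-in (the paper does not name a specific detector), and the file names are placeholders:

    ```python
    import cv2

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-check gives tie-point candidates.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(d1, d2), key=lambda m: m.distance)

    # Image coordinates usable for relative orientation / bundle adjustment.
    pts1 = [k1[m.queryIdx].pt for m in matches[:200]]
    pts2 = [k2[m.trainIdx].pt for m in matches[:200]]
    print(len(matches), "cross-checked matches")
    ```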

  17. Depth of Monocular Elements in a Binocular Scene: The Conditions for da Vinci Stereopsis

    Science.gov (United States)

    Cook, Michael; Gillam, Barbara

    2004-01-01

    Quantitative depth based on binocular resolution of visibility constraints is demonstrated in a novel stereogram representing an object, visible to one eye only, and seen through an aperture or camouflaged against a background. The monocular region in the display is attached to the binocular region, so that the stereogram represents an object which…

  18. Dopamine antagonists and brief vision distinguish lens-induced- and form-deprivation-induced myopia

    OpenAIRE

    Nickla, Debora L.; Totonelly, Kristen

    2011-01-01

    In eyes wearing negative lenses, the D2 dopamine antagonist spiperone was only partly effective in preventing the ameliorative effects of brief periods of vision (Nickla et al., 2010), in contrast to reports from studies using form deprivation. The present study was done to directly compare the effects of spiperone and the D1 antagonist SCH-23390 on the two different myopiagenic paradigms. 12-day-old chickens wore monocular diffusers (form deprivation) or −10 D lenses attached to the feath...

  19. Theoretical Design and First Test in Laboratory of a Composite Visual Servo-Based Target Spray Robotic System

    Directory of Open Access Journals (Sweden)

    Dongjie Zhao

    2016-01-01

    Full Text Available In order to spray onto the canopy of crops planted at intervals, an approach using a target spray robot with a composite visual servo system, based on monocular scene vision and monocular eye-in-hand vision, was proposed. The scene camera was used to roughly locate the target crop, and image-processing methods for background segmentation, crop canopy centroid extraction, and 3D positioning were studied. The eye-in-hand camera was used to precisely determine the spray position for each crop. Based on the center and area of the 2D minimum enclosing circle (MEC) of the crop canopy, a method to calculate spray position and spray time was determined. In addition, a locating algorithm for the MEC center in the nozzle reference frame and the hand-eye calibration matrix were studied. The process of the mechanical arm guiding the nozzle to spray was divided into three stages: reset, alignment, and hovering spray, and the servo method for each stage was investigated. For preliminary verification of the theoretical studies, a simplified experimental prototype containing one spray mechanical arm was built and performance tests were carried out under controlled laboratory conditions. The results showed that the prototype could achieve the effect of “spraying while moving and accurately spraying on target.”
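
    The minimum-enclosing-circle computation that drives the spray position can be sketched with OpenCV; the green-segmentation bounds and image path are illustrative assumptions:

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("canopy.png")  # placeholder path
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Rough green-canopy segmentation; bounds are illustrative.
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    canopy = max(contours, key=cv2.contourArea)

    # Center and area of the MEC determine spray position and spray time.
    (cx, cy), r = cv2.minEnclosingCircle(canopy)
    print(f"spray at ({cx:.0f}, {cy:.0f}) px, MEC area {np.pi * r * r:.0f} px^2")
    ```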

  20. Distance Estimation by Fusing Radar and Monocular Camera with Kalman Filter

    OpenAIRE

    Feng, Yuxiang; Pickering, Simon; Chappell, Edward; Iravani, Pejman; Brace, Christian

    2017-01-01

    The major contribution of this paper is to propose a low-cost, accurate distance estimation approach. It can potentially be used in driver modelling, accident avoidance and autonomous driving. Based on MATLAB and Python, sensory data from a Continental radar and a monocular dashcam were fused using a Kalman filter. Both sensors were mounted on a Volkswagen Sharan, which repeatedly drove the same route. The established system consists of three components: radar data processing, camera dat...
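
    A minimal sketch of the fusion idea, assuming a 1D constant-velocity model in which radar and camera both observe the same distance; all noise values are illustrative, not the paper's tuning:

    ```python
    import numpy as np

    x = np.array([20.0, 0.0])                 # state: [distance m, rate m/s]
    P = np.eye(2)
    F = np.array([[1.0, 0.1], [0.0, 1.0]])    # dt = 0.1 s
    Q = np.diag([0.01, 0.1])
    H = np.array([[1.0, 0.0], [1.0, 0.0]])    # both sensors measure distance
    R = np.diag([0.25, 4.0])                  # radar assumed more precise

    def step(x, P, z):
        x, P = F @ x, F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

    x, P = step(x, P, np.array([19.4, 20.1]))  # [radar, camera] in metres
    print(x)
    ```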

  1. Transient monocular blindness and the risk of vascular complications according to subtype : a prospective cohort study

    NARCIS (Netherlands)

    Volkers, Eline J; Donders, Richard C J M; Koudstaal, Peter J; van Gijn, Jan; Algra, Ale; Jaap Kappelle, L

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341 consecutive

  2. Transient monocular blindness and the risk of vascular complications according to subtype: a prospective cohort study

    NARCIS (Netherlands)

    Volkers, E.J. (Eline J.); R. Donders (Rogier); P.J. Koudstaal (Peter Jan); van Gijn, J. (Jan); A. Algra (Ale); L. Jaap Kappelle

    2016-01-01

    Patients with transient monocular blindness (TMB) can present with many different symptoms, and diagnosis is usually based on the history alone. In this study, we assessed the risk of vascular complications according to different characteristics of TMB. We prospectively studied 341

  3. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Chalimbaud Pierre

    2007-01-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  4. Embedded Active Vision System Based on an FPGA Architecture

    Directory of Open Access Journals (Sweden)

    Pierre Chalimbaud

    2006-12-01

    Full Text Available In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial set. With such a structure based on reprogrammable devices, this system admits a high degree of versatility and allows the implementation of parallel image processing algorithms.

  5. Intersection Recognition and Guide-Path Selection for a Vision-Based AGV in a Bidirectional Flow Network

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2014-03-01

    Full Text Available Vision recognition and RFID perception are used to develop a smart AGV that travels on fixed paths while retaining low cost, simplicity and reliability. Visible landmarks can describe the shapes and geometric dimensions of lines and intersections, and RFID tags can directly record global locations on pathways and the local topological relations of crossroads. A topological map is convenient to build and edit without the need for accurate poses when establishing a priori knowledge of a workplace. To obtain the flexibility of bidirectional movement along guide-paths, a camera placed in the centre of the AGV looks vertically downward at landmarks on the floor. A small visual field presents many difficulties for vision guidance, especially for real-time, correct and reliable recognition of multi-branch crossroads. First, region projection and contour scanning methods are both used to extract shape features. Then, LDA is used to reduce the dimensionality of the features. Third, a hierarchical SVM classifier is proposed to classify the multi-branch patterns once the shape features are complete. Our experiments in landmark recognition and navigation show that low-cost vision systems are robust to visual noise, image breakage and floor changes, and that a vision-based AGV can locate itself precisely on its paths, recognize different crossroads intelligently by verifying the conformance of vision and RFID information, and select its next pathway efficiently in a bidirectional flow network.
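
    The LDA-plus-SVM stage might look as follows in scikit-learn; the feature vectors are random stand-ins, and a single multi-class SVM replaces the paper's hierarchical classifier:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # Stand-in data: 32-dimensional shape features, 4 crossroad classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 32))
    y = rng.integers(0, 4, size=400)

    # LDA reduces dimensionality (at most n_classes - 1 components),
    # then the SVM classifies the multi-branch pattern.
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3),
                        SVC(kernel="rbf"))
    clf.fit(X[:300], y[:300])
    print("held-out accuracy:", clf.score(X[300:], y[300:]))
    ```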

  6. A Novel Metric Online Monocular SLAM Approach for Indoor Applications

    Directory of Open Access Journals (Sweden)

    Yongfei Li

    2016-01-01

    Full Text Available Monocular SLAM has attracted more attention recently due to its flexibility and low cost. In this paper, a novel metric online direct monocular SLAM approach is proposed, which can obtain a metric reconstruction of the scene. In the proposed approach, a chessboard is utilized to provide an initial depth map and scale correction information during the SLAM process. The chessboard provides the absolute scale of the scene and serves as a bridge between the camera coordinate frame and the world coordinate frame. The scene is reconstructed as a series of key frames with their poses and correlative semidense depth maps, using highly accurate pose estimation achieved by direct grid point-based alignment. The estimated pose is coupled with depth map estimation calculated by filtering over a large number of pixelwise small-baseline stereo comparisons. In addition, this paper formulates the scale-drift model among key frames, and the calibration chessboard is used to correct the accumulated pose error. Several indoor experiments were conducted; the results suggest that the proposed approach achieves higher reconstruction accuracy than the traditional LSD-SLAM approach, and that it runs in real time on a commodity computer.
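
    How a chessboard can anchor the metric scale of a monocular reconstruction is sketched below; the square size, pattern, intrinsics and image path are assumptions, and solvePnP stands in for the paper's own estimation:

    ```python
    import cv2
    import numpy as np

    square = 0.025                # assumed 25 mm squares
    pattern = (9, 6)              # assumed inner-corner grid

    img = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    ok, corners = cv2.findChessboardCorners(img, pattern)
    assert ok, "chessboard not found"

    # 3D corner positions in the board frame (z = 0 plane).
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    K = np.array([[525.0, 0, 319.5], [0, 525.0, 239.5], [0, 0, 1]])  # assumed
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, None)

    # |tvec| is a metric camera-to-board distance: it anchors the SLAM scale.
    print("camera-to-board distance [m]:", np.linalg.norm(tvec))
    ```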

  7. Design Issues for MEMS-Based Pedestrian Inertial Navigation Systems

    Directory of Open Access Journals (Sweden)

    P. S. Marinushkin

    2015-01-01

    Full Text Available The paper describes design issues for MEMS-based pedestrian inertial navigation systems. Algorithms to estimate navigation parameters for strap-down inertial navigation systems on the basis of plural observations have already been well developed. At the same time, the mathematical and software processing of information in pedestrian inertial navigation systems has its own specificity, due to the peculiarities of their functioning and exploitation. There is therefore a pressing need to enhance existing fusion algorithms for use in pedestrian navigation systems. For this purpose, the article analyzes the hardware composition and configuration of existing systems of this class and shows the advantages of various technical solutions. Relying on their main features, it justifies a choice of navigation system architecture and hardware composition that improves the estimation accuracy of the user's position as compared to systems using only inertial sensors. The next point concerns the development of algorithms for complex processing of heterogeneous information. To increase the accuracy of a free-running pedestrian inertial navigation system, we propose an adaptive algorithm for joint processing of heterogeneous information based on the fusion of inertial information with magnetometer measurements using an EKF approach. Modeling of the algorithm was carried out using a specially developed functional prototype of the pedestrian inertial navigation system, implemented as a hardware/software complex in the Matlab environment. Functional prototype tests of the developed system demonstrated an improvement in the estimation of navigation parameters compared to systems based on inertial sensors only. This leads to the conclusion that the synthesized algorithm provides satisfactory accuracy for calculating the trajectory of motion even when using low-grade inertial MEMS sensors. The developed algorithm can be
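
    As a much-simplified stand-in for the proposed EKF-based fusion, a complementary filter that corrects gyro-integrated heading with magnetometer readings illustrates the idea; all values are made up:

    ```python
    import numpy as np

    def fuse(heading, gyro_z, mag_heading, dt=0.01, alpha=0.98):
        """Blend integrated gyro yaw with magnetometer heading (radians)."""
        pred = heading + gyro_z * dt                              # integrate gyro
        err = (mag_heading - pred + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi]
        return pred + (1 - alpha) * err                           # correct drift

    h = 0.0
    for _ in range(100):
        h = fuse(h, gyro_z=0.05, mag_heading=0.1)
    print("fused heading [rad]:", h)
    ```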

  8. Panoramic stereo sphere vision

    Science.gov (United States)

    Feng, Weijia; Zhang, Baofeng; Röning, Juha; Zong, Xiaoning; Yi, Tian

    2013-01-01

    Conventional stereo vision systems have a small field of view (FOV), which limits their usefulness for certain applications. While panoramic vision is able to "see" in all directions of the observation space, scene depth information is lost because of the mapping from 3D reference coordinates to the 2D panoramic image. In this paper, we present an innovative vision system built from a special combined fish-eye lens module, capable of producing 3D coordinate information for the whole global observation space while simultaneously acquiring a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. It is called Panoramic Stereo Sphere Vision (PSSV). We propose its geometric model, mathematical model and parameter calibration method in this paper. Video surveillance, robotic autonomous navigation, virtual reality, driving assistance, multiple maneuvering target tracking, automatic mapping of environments and attitude estimation are some of the applications which will benefit from PSSV.

  9. Distributed Monocular SLAM for Indoor Map Building

    Directory of Open Access Journals (Sweden)

    Ruwan Egodagamage

    2017-01-01

    Full Text Available Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM, an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps, thus requiring a distributed computational framework. Each agent can generate its own local map, which can then be combined into a map covering a larger area. By doing so, agents can cover a given environment faster than a single agent. Furthermore, they can interact with each other in the same environment, making this framework more practical, especially for collaborative applications such as augmented reality. One of the main challenges of distributed SLAM is identifying overlapping maps, especially when the relative starting positions of agents are unknown. In this paper, we propose a system with multiple monocular agents, with unknown relative starting positions, that generates a semidense global map of the environment.

  10. Real-Time Vehicle Speed Estimation Based on License Plate Tracking in Monocular Video Sequences

    Directory of Open Access Journals (Sweden)

    Aleksej MAKAROV

    2016-02-01

    Full Text Available A method of estimating vehicle speed from images obtained by a fixed over-the-road monocular camera is presented. The method is based on detecting and tracking vehicle license plates. The contrast between the license plate and its surroundings is enhanced using infrared light-emitting diodes and infrared camera filters. A range of license plate height values is assumed a priori. The camera's vertical angle of view is measured prior to installation, and the camera tilt is continuously measured by a micro-electromechanical sensor. The distance of the license plate from the camera is theoretically derived in terms of its pixel coordinates. Inaccuracies due to frame-rate drift, tilt and angle-of-view measurement errors, edge-pixel detection, and the coarse assumption of license plate height are analyzed and theoretically formulated. The resulting system is computationally efficient, inexpensive, and easy to install and maintain alongside existing ALPR cameras.
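
    The underlying pinhole geometry can be sketched as follows; plate height, focal length and pixel measurements are illustrative, and the tilt and angle-of-view corrections analyzed in the paper are omitted:

    ```python
    H_PLATE = 0.11     # assumed plate height [m]
    F_PX = 1800.0      # assumed focal length [px]
    FPS = 25.0

    def distance(h_px):
        """Camera-to-plate distance from the plate's height in pixels."""
        return F_PX * H_PLATE / h_px

    d1 = distance(42.0)        # frame k
    d2 = distance(48.0)        # frame k+1: plate larger, car approaching
    speed = (d1 - d2) * FPS    # metres per second between frames
    print(f"{d1:.2f} m -> {d2:.2f} m, about {3.6 * speed:.0f} km/h")
    ```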

  11. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    James K. Archibald

    2006-12-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  12. Reconfigurable On-Board Vision Processing for Small Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Fife WadeS

    2007-01-01

    Full Text Available This paper addresses the challenge of supporting real-time vision processing on-board small autonomous vehicles. Local vision gives increased autonomous capability, but it requires substantial computing power that is difficult to provide given the severe constraints of small size and battery-powered operation. We describe a custom FPGA-based circuit board designed to support research in the development of algorithms for image-directed navigation and control. We show that the FPGA approach supports real-time vision algorithms by describing the implementation of an algorithm to construct a three-dimensional (3D map of the environment surrounding a small mobile robot. We show that FPGAs are well suited for systems that must be flexible and deliver high levels of performance, especially in embedded settings where space and power are significant concerns.

  13. Plenoptic Imager for Automated Surface Navigation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Autonomous and semi-autonomous robotic systems require information about their surroundings in order to navigate properly. A video camera machine vision system can...

  14. Monocular and binocular development in children with albinism, infantile nystagmus syndrome, and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity

  15. Monocular deprivation of Fourier phase information boosts the deprived eye's dominance during interocular competition but not interocular phase combination.

    Science.gov (United States)

    Bai, Jianying; Dong, Xue; He, Sheng; Bao, Min

    2017-06-03

    Ocular dominance has been extensively studied, often with the goal of understanding neuroplasticity, which is a key characteristic of the critical period. Recent work on monocular deprivation, however, demonstrates residual neuroplasticity in the adult visual cortex. After deprivation of patterned inputs by monocular patching, the patched eye becomes more dominant. Since patching blocks both the Fourier amplitude and phase information of the input image, it remains unclear whether deprivation of the Fourier phase information alone is able to reshape eye dominance. Here, for the first time, we show that removing the phase regularity without changing the amplitude spectra of the input image induced a shift of eye dominance toward the deprived eye, but only if eye dominance was measured with a binocular rivalry task rather than an interocular phase combination task. These different results indicate that the two measurements are supported by different mechanisms. Phase integration requires the fusion of monocular images; the fused percept relies heavily on the weights of the phase-sensitive monocular neurons that respond to the two monocular images. Binocular rivalry, by contrast, reflects the result of direct interocular competition that strongly weights the contour information transmitted along each monocular pathway. Monocular phase deprivation may not change the weights in the integration (fusion) mechanism much, but it alters the balance in the rivalry (competition) mechanism. Our work suggests that ocular dominance plasticity may occur at different stages of visual processing, and that homeostatic compensation also occurs for the lack of phase regularity in natural scenes. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
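
    The deprivation stimulus described here, randomized Fourier phase with a preserved amplitude spectrum, can be sketched in a few lines of NumPy; the input image is a random stand-in:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((256, 256))   # stand-in for a natural image

    amp = np.abs(np.fft.fft2(img))

    # Phases taken from another real-valued image are conjugate-symmetric,
    # so the inverse transform stays (numerically) real.
    phase = np.angle(np.fft.fft2(rng.random(img.shape)))

    scrambled = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    print(scrambled.shape)
    ```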

  16. Vision based guidance and flight control in problems of aerial tracking

    Science.gov (United States)

    Stepanyan, Vahram

    The use of visual sensors to provide the information necessary for autonomous guidance and navigation of unmanned air vehicles (UAVs) or micro air vehicles (MAVs) is inspired by biological systems and is motivated, first of all, by the reduced cost of navigation sensors. Visual sensors can also be advantageous in military operations since they are difficult to detect. However, the design of a reliable guidance, navigation and control system for aerial vehicles based only on visual information has many unsolved problems, ranging from hardware/software development to purely control-theoretical issues, which are complicated further when applied to the tracking of maneuvering unknown targets. This dissertation describes guidance law design and implementation algorithms for autonomous tracking of a flying target, when information about the target's current position is obtained via a monocular camera mounted on the tracking UAV (follower). The visual information is related to the target's relative position in the follower's body frame via the target's apparent size, which is assumed to be constant, but otherwise unknown to the follower. The formulation of the relative dynamics in the inertial frame requires knowledge of the follower's orientation angles, which are assumed to be known. No information is assumed to be available about the target's dynamics. The follower's objective is to maintain a desired relative position irrespective of the target's motion. Two types of guidance laws are designed and implemented in the dissertation. The first is a smooth guidance law that guarantees asymptotic tracking of a target whose velocity is viewed as a time-varying disturbance, the change in magnitude of which has a bounded integral. The second is a smooth approximation of a discontinuous guidance law that guarantees bounded tracking with adjustable bounds when the target's acceleration is viewed as a bounded but otherwise

  17. Effect of monocular deprivation on rabbit neural retinal cell densities

    Directory of Open Access Journals (Sweden)

    Philip Maseghe Mwachaka

    2015-01-01

    Conclusion: In this rabbit model, monocular deprivation resulted in activity-dependent changes in cell densities of the neural retina in favour of the non-deprived eye along with reduced cell densities in the deprived eye.

  18. Monocular and binocular development in children with albinism, infantile nystagmus syndrome and normal vision

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.

    2013-01-01

    Background/aims: To compare interocular acuity differences, crowding ratios, and binocular summation ratios in 4- to 8-year-old children with albinism (n = 16), children with infantile nystagmus syndrome (n = 10), and children with normal vision (n = 72). Methods: Interocular acuity differences and

  19. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2017-02-01

    Full Text Available Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree-of-freedom (DOF) pose estimation technology enabling robots to grasp large-size parts at arbitrary poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several feature points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object-space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. The camera's measurement poses are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system was also developed, whereby the robot can adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker; results show the RMS angle and position errors are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also performed successfully in the experiments.
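
    A minimal pose-from-feature-points sketch using OpenCV's solvePnP (EPnP variant), not the authors' nonlinear optimizer; all coordinates and intrinsics are illustrative:

    ```python
    import cv2
    import numpy as np

    # Feature points in the part's frame [m] and their detections [px].
    obj_pts = np.array([[0, 0, 0], [0.4, 0, 0], [0.4, 0.3, 0],
                        [0, 0.3, 0], [0.2, 0.15, 0.1]], np.float32)
    img_pts = np.array([[320, 240], [520, 238], [522, 388],
                        [318, 390], [420, 300]], np.float32)

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None,
                                  flags=cv2.SOLVEPNP_EPNP)

    R, _ = cv2.Rodrigues(rvec)   # part orientation in the camera frame
    print("t [m]:", tvec.ravel())
    ```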

  20. Ego-motion based on EM for bionic navigation

    Science.gov (United States)

    Yue, Xiaofeng; Wang, L. J.; Liu, J. G.

    2015-12-01

    Research has shown that flying insects such as bees can achieve efficient and robust flight control, and biologists have explored biomimetic principles regarding how they control flight. Based on those studies and on principles acquired from flying insects, this paper proposes a different solution for recovering ego-motion for low-level navigation. First, a new type of entropy flow is provided to calculate the motion parameters. Second, the EKF, which has been used in navigation for some years to correct accumulated error, and Expectation-Maximization (EM), which is commonly used to estimate parameters, are combined to determine the ego-motion estimate of aerial vehicles. Numerical simulation in MATLAB has shown that this navigation system provides more accurate position estimates and smaller mean absolute error than pure optical-flow navigation. This paper does pioneering work in applying bionic mechanisms to navigation.

  1. Embedded active vision system based on an FPGA architecture

    OpenAIRE

    Chalimbaud , Pierre; Berry , François

    2006-01-01

    In computer vision and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques dramatically increase algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (and more precisely early vision) is proposed. Active vision appears as an alternative approach to deal with artificial vision problems. The central idea is to take into account the perceptual aspects of visual tasks,...

  2. Neuroimaging of amblyopia and binocular vision: a review.

    Science.gov (United States)

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, and more and more studies are examining binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and, more recently, with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision, using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  3. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera...... navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera.......033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher...

  4. Analysis the macular ganglion cell complex thickness in monocular strabismic amblyopia patients by Fourier-domain OCT

    Directory of Open Access Journals (Sweden)

    Hong-Wei Deng

    2014-11-01

    Full Text Available AIM: To measure macular ganglion cell complex (mGCC) thickness in patients with monocular strabismic amblyopia, in order to explore the relationship between the degree of amblyopia and mGCC thickness, and to find out whether macular ganglion cell structure is abnormal in strabismic amblyopia. METHODS: Using a Fourier-domain optical coherence tomography (FD-OCT) instrument, the iVue® (Optovue Inc, Fremont, CA), mGCC thickness was measured in the 26 patients (52 eyes) included in this study, and its correlation with best-corrected visual acuity was analyzed. RESULTS: The mean mGCC thickness was examined in three regions: central, inner circle (3mm) and outer circle (6mm). The mean thicknesses were 50.74±21.51μm, 101.4±8.51μm and 114.2±9.455μm in the strabismic amblyopic eyes (SAE), and 43.79±11.92μm, 92.47±25.01μm and 113.3±12.88μm in the contralateral sound eyes (CSE), respectively. The difference between the eyes was not statistically significant (P>0.05), but best-corrected visual acuity correlated well with mGCC thickness, more strongly for the lower part than for the upper part. CONCLUSION: There is a relationship between amblyopic visual acuity and mGCC thickness, although the difference in mGCC thickness between SAE and CSE was not statistically significant. Measuring central mGCC thickness in the clinic may help gauge the degree of amblyopia.

  5. Satellite Imagery Assisted Road-Based Visual Navigation System

    Science.gov (United States)

    Volkova, A.; Gibbens, P. W.

    2016-06-01

    There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in the on-board camera's video stream. On one level this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level it correlates the features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes in the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery and another provider can be used.

  6. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses.

    Science.gov (United States)

    McKibbin, Martin; Farragher, Tracey M; Shickle, Darren

    2018-01-01

    To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. For the 65 033 UK Biobank participants, aged 40-69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population.

  7. Monocular and binocular visual impairment in the UK Biobank study: prevalence, associations and diagnoses

    Science.gov (United States)

    Farragher, Tracey M; Shickle, Darren

    2018-01-01

    Objective To determine the prevalence of, associations with and diagnoses leading to mild visual impairment or worse (logMAR >0.3) in middle-aged adults in the UK Biobank study. Methods and analysis Prevalence estimates for monocular and binocular visual impairment were determined for the UK Biobank participants with fundus photographs and spectral domain optical coherence tomography images. Associations with socioeconomic, biometric, lifestyle and medical variables were investigated for cases with visual impairment and matched controls, using multinomial logistic regression models. Self-reported eye history and image grading results were used to identify the primary diagnoses leading to visual impairment for a sample of 25% of cases. Results For the 65 033 UK Biobank participants, aged 40–69 years and with fundus images, 6682 (10.3%) and 1677 (2.6%) had mild visual impairment or worse in one or both eyes, respectively. Increasing deprivation, age and ethnicity were independently associated with both monocular and binocular visual impairment. No primary diagnosis for the recorded level of visual impairment could be identified for 49.8% of eyes. The most common identifiable diagnoses leading to visual impairment were cataract, amblyopia, uncorrected refractive error and vitreoretinal interface abnormalities. Conclusions The prevalence of visual impairment in the UK Biobank study cohort is lower than for population-based studies from other industrialised countries. Monocular and binocular visual impairment are associated with increasing deprivation, age and ethnicity. The UK Biobank dataset does not allow confident identification of the causes of visual impairment, and the results may not be applicable to the wider UK population. PMID:29657974

  8. Combined measurement system for double shield tunnel boring machine guidance based on optical and visual methods.

    Science.gov (United States)

    Lin, Jiarui; Gao, Kai; Gao, Yang; Wang, Zheng

    2017-10-01

    In order to detect the position of the cutting shield at the head of a double shield tunnel boring machine (TBM) during excavation, this paper develops a combined measurement system which is mainly composed of several optical feature points, a monocular vision sensor, a laser target sensor, and a total station. The different elements of the combined system are mounted on the TBM in a suitable sequence, and the position of the cutting shield in the reference total station frame is determined by coordinate transformations. Subsequently, the structure of the feature points and the technique for matching them are expounded, the position measurement method based on monocular vision is presented, and calibration methods for the unknown relationships among different parts of the system are proposed. Finally, an experimental platform simulating the double shield TBM is established and accuracy verification experiments are conducted. Experimental results show that the mean deviation of the system is 6.8 mm, which satisfies the requirements of double shield TBM guidance.
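
    The coordinate-transformation chain can be illustrated with homogeneous transforms; the rotations and translations below are placeholders, not calibration results:

    ```python
    import numpy as np

    def T(R, t):
        """Build a 4x4 homogeneous transform from rotation R and translation t."""
        M = np.eye(4)
        M[:3, :3], M[:3, 3] = R, t
        return M

    T_station_target = T(np.eye(3), [10.0, 2.0, 0.5])  # from the total station
    T_target_camera = T(np.eye(3), [0.3, 0.0, 0.1])    # from calibration
    T_camera_shield = T(np.eye(3), [0.0, 0.0, 4.2])    # from monocular vision

    T_station_shield = T_station_target @ T_target_camera @ T_camera_shield
    print("cutting shield position [m]:", T_station_shield[:3, 3])
    ```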

  9. Vision-based map building and trajectory planning to enable autonomous flight through urban environments

    Science.gov (United States)

    Watkins, Adam S.

    The desire to use unmanned air vehicles (UAVs) in a variety of complex missions has motivated the need to increase the autonomous capabilities of these vehicles. This research presents autonomous vision-based mapping and trajectory planning strategies for a UAV navigating in an unknown urban environment. It is assumed that the vehicle's inertial position is unknown because GPS is unavailable due to environmental occlusions or jamming by hostile military assets. Therefore, the environment map is constructed from noisy sensor measurements taken at uncertain vehicle locations. Under these restrictions, map construction becomes a state estimation task known as the Simultaneous Localization and Mapping (SLAM) problem. Solutions to the SLAM problem endeavor to estimate the state of a vehicle relative to concurrently estimated environmental landmark locations. The presented work focuses specifically on SLAM for aircraft, denoted airborne SLAM, where the vehicle is capable of six-degree-of-freedom motion characterized by highly nonlinear equations of motion. The airborne SLAM problem is solved with a variety of filters based on the Rao-Blackwellized particle filter. Additionally, the environment is represented as a set of geometric primitives fit to the three-dimensional points reconstructed from gathered onboard imagery. The second half of this research builds on the mapping solution by addressing the problem of trajectory planning for optimal map construction. Optimality is defined in terms of maximizing environment coverage in minimum time. The planning process is decomposed into two phases of global navigation and local navigation. The global navigation strategy plans a coarse, collision-free path through the environment to a goal location that will take the vehicle to previously unexplored or incompletely viewed territory. The local navigation strategy plans detailed, collision-free paths within the currently sensed environment that maximize local coverage.

  10. Computer vision based room interior design

    Science.gov (United States)

    Ahmad, Nasir; Hussain, Saddam; Ahmad, Kashif; Conci, Nicola

    2015-12-01

    This paper introduces a new application of computer vision. To the best of the authors' knowledge, it is the first attempt to incorporate computer vision techniques into room interior design. Computer-vision-based interior design is achieved in two steps: object identification and color assignment. An image segmentation approach is used to identify the objects in the room, and different color schemes are used for color assignment to these objects. The proposed approach is applied to simple as well as complex images from online sources. The proposed approach not only accelerates the process of interior design but also makes it very efficient by offering multiple alternatives.

  11. Vision-based control of the Manus using SIFT

    NARCIS (Netherlands)

    Liefhebber, F.; Sijs, J.

    2007-01-01

    The rehabilitation robot Manus is an assistive device for severely motor-handicapped users. Executing activities of daily living with the Manus can be very complex, and a vision-based controller can simplify this. The weakness of existing vision-based controlled systems is the poor reliability of the

  12. Distributed Monocular SLAM for Indoor Map Building

    OpenAIRE

    Ruwan Egodagamage; Mihran Tuceryan

    2017-01-01

    Utilization and generation of indoor maps are critical elements in accurate indoor tracking. Simultaneous Localization and Mapping (SLAM) is one of the main techniques for such map generation. In SLAM an agent generates a map of an unknown environment while estimating its location in it. Ubiquitous cameras lead to monocular visual SLAM, where a camera is the only sensing device for the SLAM process. In modern applications, multiple mobile agents may be involved in the generation of such maps,...

  13. New development in robot vision

    CERN Document Server

    Behal, Aman; Chung, Chi-Kit

    2015-01-01

    The field of robotic vision has advanced dramatically recently with the development of new range sensors.  Tremendous progress has been made resulting in significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advancements in the field of robotic vision. The book starts with articles that describe new techniques to understand scenes from 2D/3D data such as estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, approach to recognize partially occluded objects, etc. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual servoing based alignment strategy for microassembly, and increasing object recognition reliability using related...

  14. A feasibility study of optical flow-based navigation during colonoscopy

    NARCIS (Netherlands)

    van der Stap, N.; Reilink, Rob; Misra, Sarthak; Broeders, Ivo Adriaan Maria Johannes; van der Heijden, Ferdinand

    In this study, it was shown that, using the optical flow and the focus of expansion obtained from the monocular camera at the tip of a colonoscope, (semi-)automated steering of flexible endoscopes might become possible. This automation might help to increase colonoscopy efficiency, but is also
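
    A minimal sketch of estimating the focus of expansion (FOE) from dense optical flow, with Farneback flow as a stand-in for the study's method and placeholder image paths:

    ```python
    import cv2
    import numpy as np

    f0 = cv2.imread("colon_t0.png", cv2.IMREAD_GRAYSCALE)  # placeholders
    f1 = cv2.imread("colon_t1.png", cv2.IMREAD_GRAYSCALE)

    flow = cv2.calcOpticalFlowFarneback(f0, f1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    u, v = flow[..., 0].ravel(), flow[..., 1].ravel()
    x, y = xs.ravel(), ys.ravel()

    # During forward motion flow radiates from the FOE, so each pixel gives
    # one line constraint: v * (x - foe_x) - u * (y - foe_y) = 0.
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("FOE estimate [px]:", foe)
    ```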

  15. Improving Mobility Performance in Low Vision With a Distance-Based Representation of the Visual Scene.

    Science.gov (United States)

    van Rheede, Joram J; Wilson, Iain R; Qian, Rose I; Downes, Susan M; Kennard, Christopher; Hicks, Stephen L

    2015-07-01

    Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles. We accomplished this by developing a pair of "residual vision glasses" (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants during the completion of a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to find correlates of obstacle detection and hesitations in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions. All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation. A depth-based representation of the visual environment may offer low vision patients improvements in independent mobility. It is important for further work to explore whether practice can overcome the reductions in speed and increased hesitation that were observed in our trial.
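
    The depth-to-brightness rendering can be sketched in a few lines; the near/far range and frame contents are illustrative, not the RVGs' actual mapping:

    ```python
    import numpy as np

    NEAR, FAR = 0.5, 4.0   # assumed working range [m]

    def depth_to_brightness(depth_m):
        """Map a depth frame [m] to 8-bit brightness, nearer = brighter."""
        d = np.clip(depth_m, NEAR, FAR)
        return ((FAR - d) / (FAR - NEAR) * 255).astype(np.uint8)

    depth = np.full((240, 320), 3.5)   # stand-in frame: mostly far
    depth[100:140, 150:200] = 0.8      # one close obstacle
    frame = depth_to_brightness(depth)
    print(frame.min(), frame.max())    # dim background, bright obstacle
    ```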

  16. Wavefront Propagation and Fuzzy Based Autonomous Navigation

    Directory of Open Access Journals (Sweden)

    Adel Al-Jumaily

    2005-06-01

    Full Text Available Path planning and obstacle avoidance are the two major issues in any navigation system. The wavefront propagation algorithm, as a good path planner, can be used to determine an optimal path, while obstacle avoidance can be achieved using possibility theory. Combining these two functions enables a robot to navigate autonomously to its destination. This paper presents the approach and results of implementing an autonomous navigation system for an indoor mobile robot. The system developed is based on a laser sensor used to retrieve data to update a two-dimensional world model of the robot's environment. Waypoints in the path are incorporated into the obstacle avoidance. Features such as ageing of objects and smooth motion planning are implemented to enhance efficiency and to cater for dynamic environments.
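
    A minimal wavefront-propagation sketch on a small occupancy grid: breadth-first costs from the goal, then descent from the start along decreasing values (the grid is illustrative):

    ```python
    from collections import deque

    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],   # 1 = obstacle
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    goal, start = (0, 3), (3, 0)

    wave = {goal: 0}
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for nb in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if (0 <= nb[0] < 4 and 0 <= nb[1] < 4
                    and grid[nb[0]][nb[1]] == 0 and nb not in wave):
                wave[nb] = wave[(r, c)] + 1
                q.append(nb)

    path, cell = [start], start
    while cell != goal:   # follow decreasing wave values to the goal
        r, c = cell
        cell = min(((r-1, c), (r+1, c), (r, c-1), (r, c+1)),
                   key=lambda p: wave.get(p, float("inf")))
        path.append(cell)
    print(path)
    ```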

  17. Ultrasound-based tumor movement compensation during navigated laparoscopic liver interventions.

    Science.gov (United States)

    Shahin, Osama; Beširević, Armin; Kleemann, Markus; Schlaefer, Alexander

    2014-05-01

    Image-guided navigation aims to provide better orientation and accuracy in laparoscopic interventions. However, the ability of the navigation system to reflect anatomical changes and maintain high accuracy during the procedure is crucial. This is particularly challenging in soft organs such as the liver, where surgical manipulation causes significant tumor movements. We propose a fast approach to obtain an accurate estimation of the tumor position throughout the procedure. Initially, a three-dimensional (3D) ultrasound image is reconstructed and the tumor is segmented. During surgery, the position of the tumor is updated based on newly acquired tracked ultrasound images. The initial segmentation of the tumor is used to automatically detect the tumor and update its position in the navigation system. Two experiments were conducted. First, a controlled phantom motion using a robot was performed to validate the tracking accuracy. Second, a needle navigation scenario based on pseudotumors injected into ex vivo porcine liver was studied. In the robot-based evaluation, the approach estimated the target location with an accuracy of 0.4 ± 0.3 mm. The mean navigation error in the needle experiment was 1.2 ± 0.6 mm, and the algorithm compensated for tumor shifts up to 38 mm in an average time of 1 s. We demonstrated a navigation approach based on tracked laparoscopic ultrasound (LUS), and focused on the neighborhood of the tumor. Our experimental results indicate that this approach can be used to quickly and accurately compensate for tumor movements caused by surgical manipulation during laparoscopic interventions. The proposed approach has the advantage of being based on the routinely used LUS; however, it upgrades its functionality to estimate the tumor position in 3D. Hence, the approach is repeatable throughout surgery, and enables high navigation accuracy to be maintained.

  18. Stereo Vision Guiding for the Autonomous Landing of Fixed-Wing UAVs: A Saliency-Inspired Approach

    Directory of Open Access Journals (Sweden)

    Zhaowei Ma

    2016-03-01

    Full Text Available Landing safely on the runway is an important requirement for unmanned aerial vehicles (UAVs). This paper concentrates on stereo vision localization for a fixed-wing UAV's autonomous landing in global navigation satellite system (GNSS)-denied environments. A ground stereo vision guidance system imitating the human visual system (HVS) is presented for the autonomous landing of fixed-wing UAVs. A saliency-inspired algorithm is developed to detect flying UAV targets in captured sequential images. Furthermore, extended Kalman filter (EKF) based state estimation is employed to reduce localization errors caused by measurement errors in object detection and pan-tilt unit (PTU) attitudes. Finally, experiments based on a stereo vision dataset are conducted to verify the effectiveness of the proposed visual detection method and error correction algorithm. The comparison between the visual guidance approach and a differential GPS-based approach indicates that the stereo vision system and detection method achieve a better guidance effect.

  19. Bio-inspired vision

    International Nuclear Information System (INIS)

    Posch, C

    2012-01-01

    Nature still outperforms the most powerful computers in routine functions involving perception, sensing and actuation like vision, audition, and motion control, and is, most strikingly, orders of magnitude more energy-efficient than its artificial competitors. The reasons for the superior performance of biological systems are subject to diverse investigations, but it is clear that the form of hardware and the style of computation in nervous systems are fundamentally different from what is used in artificial synchronous information processing systems. Very generally speaking, biological neural systems rely on a large number of relatively simple, slow and unreliable processing elements and obtain performance and robustness from a massively parallel principle of operation and a high level of redundancy, where the failure of single elements usually does not induce any observable degradation of system performance. In the late 1980s, Carver Mead demonstrated that silicon VLSI technology can be employed to implement "neuromorphic" circuits that mimic neural functions and to fabricate building blocks that work like their biological role models. Neuromorphic systems, like the biological systems they model, are adaptive, fault-tolerant and scalable, and process information using energy-efficient, asynchronous, event-driven methods. In this paper, some basics of neuromorphic electronic engineering and its impact on recent developments in optical sensing and artificial vision are presented. It is demonstrated that bio-inspired vision systems have the potential to outperform conventional, frame-based vision acquisition and processing systems in many application fields and to establish new benchmarks in terms of redundancy suppression/data compression, dynamic range, temporal resolution and power efficiency, realizing advanced functionality like 3D vision, object tracking, motor control, visual feedback loops, etc. in real time. It is argued that future artificial vision systems

  20. A Navigation System for the Visually Impaired: A Fusion of Vision and Depth Sensor

    Science.gov (United States)

    Kanwal, Nadia; Bostanci, Erkan; Currie, Keith; Clark, Adrian F.

    2015-01-01

    For a number of years, scientists have been trying to develop aids that can make visually impaired people more independent and aware of their surroundings. Computer-based automatic navigation tools are one example of this, motivated by the increasing miniaturization of electronics and the improvement in processing power and sensing capabilities. This paper presents a complete navigation system based on low cost and physically unobtrusive sensors such as a camera and an infrared sensor. The system is based around corners and depth values from Kinect's infrared sensor. Obstacles are found in images from a camera using corner detection, while input from the depth sensor provides the corresponding distance. The combination is both efficient and robust. The system not only identifies hurdles but also suggests a safe path (if available) to the left or right side and tells the user to stop, move left, or move right. The system has been tested in real time by both blindfolded and blind people at different indoor and outdoor locations, demonstrating that it operates adequately. PMID:27057135
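
    The corner-plus-depth combination can be sketched in a few lines. The following is an illustrative fragment (the range threshold and detector parameters are assumptions, not the paper's values): corners found in the camera image are looked up in an aligned depth map, and any corner closer than a range limit is flagged as an obstacle.

        # Illustrative corner + depth obstacle check (parameters are made up).
        import cv2
        import numpy as np

        def find_obstacles(gray: np.ndarray, depth_m: np.ndarray, max_range=1.5):
            corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                              qualityLevel=0.01, minDistance=10)
            obstacles = []
            if corners is not None:
                for (x, y) in corners.reshape(-1, 2).astype(int):
                    d = depth_m[y, x]
                    if 0 < d < max_range:   # zero marks invalid depth readings
                        obstacles.append((x, y, d))
            return obstacles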

  1. Deep hierarchies in the primate visual cortex: what can we learn for computer vision?

    Science.gov (United States)

    Krüger, Norbert; Janssen, Peter; Kalkan, Sinan; Lappe, Markus; Leonardis, Ales; Piater, Justus; Rodríguez-Sánchez, Antonio J; Wiskott, Laurenz

    2013-08-01

    Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.

  2. Object Persistence Enhances Spatial Navigation: A Case Study in Smartphone Vision Science.

    Science.gov (United States)

    Liverence, Brandon M; Scholl, Brian J

    2015-07-01

    Violations of spatiotemporal continuity disrupt performance in many tasks involving attention and working memory, but experiments on this topic have been limited to the study of moment-by-moment on-line perception, typically assessed by passive monitoring tasks. We tested whether persisting object representations also serve as underlying units of longer-term memory and active spatial navigation, using a novel paradigm inspired by the visual interfaces common to many smartphones. Participants used key presses to navigate through simple visual environments consisting of grids of icons (depicting real-world objects), only one of which was visible at a time through a static virtual window. Participants found target icons faster when navigation involved persistence cues (via sliding animations) than when persistence was disrupted (e.g., via temporally matched fading animations), with all transitions inspired by smartphone interfaces. Moreover, this difference occurred even after explicit memorization of the relevant information, which demonstrates that object persistence enhances spatial navigation in an automatic and irresistible fashion. © The Author(s) 2015.

  3. GNSS-based receiver autonomous integrity monitoring for aircraft navigation

    NARCIS (Netherlands)

    Imparato, D.

    2016-01-01

    Nowadays, GNSS-based navigation is moving more and more into critical applications. Global Navigation Satellite Systems (GNSS), which in the past were represented by the American GPS and the Russian GLONASS, are now growing in number and performance, with the European system Galileo and the Chinese BeiDou joining them.

  4. Development of a Vision-Based Robotic Follower Vehicle

    Science.gov (United States)

    2009-02-01


  5. A Hybrid Vision-Map Method for Urban Road Detection

    Directory of Open Access Journals (Sweden)

    Carlos Fernández

    2017-01-01

    Full Text Available A hybrid vision-map system is presented to solve the road detection problem in urban scenarios. The standardized use of machine learning techniques in classification problems has been merged with digital navigation map information to increase system robustness. The objective of this paper is to create a new environment perception method to detect the road in urban environments, fusing stereo vision with digital maps by detecting road appearance and road limits such as lane markings or curbs. Deep learning approaches tend to couple the system tightly to the training set. Even though our approach is based on machine learning techniques, the features are calculated from different sources (GPS, map, curbs, etc.), making our system less dependent on the training set.
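
    The fusion idea of combining features from several sources before classification can be illustrated briefly. The sketch below (the feature names and the random-forest choice are illustrative, not the paper's exact pipeline) stacks per-cell descriptors from appearance, the digital map, and curb detection into one vector per road-candidate cell, so no single source dominates the decision.

        # Illustrative multi-source feature stacking with a generic classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def stack_features(appearance, map_prior, curb_dist):
            """Each argument is an (N,) array over N candidate road cells."""
            return np.column_stack([appearance, map_prior, curb_dist])

        rng = np.random.default_rng(0)            # toy stand-in for real data
        X = stack_features(rng.random(200), rng.random(200), rng.random(200))
        y = (X[:, 1] > 0.5).astype(int)           # placeholder labels
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)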

  6. PERFORMANCE CHARACTERISTIC MEMS-BASED IMUs FOR UAVs NAVIGATION

    Directory of Open Access Journals (Sweden)

    H. A. Mohamed

    2015-08-01

    Full Text Available Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or other low-cost navigation sensors for various UAV applications is an important research topic. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.

  7. Performance Characteristic Mems-Based IMUs for UAVs Navigation

    Science.gov (United States)

    Mohamed, H. A.; Hansen, J. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, A. B.

    2015-08-01

    Accurate 3D reconstruction has become essential for non-traditional mapping applications such as urban planning, mining industry, environmental monitoring, navigation, surveillance, pipeline inspection, infrastructure monitoring, landslide hazard analysis, indoor localization, and military simulation. The needs of these applications cannot be satisfied by traditional mapping, which is based on dedicated data acquisition systems designed for mapping purposes. Recent advances in hardware and software development have made it possible to conduct accurate 3D mapping without using costly and high-end data acquisition systems. Low-cost digital cameras, laser scanners, and navigation systems can provide accurate mapping if they are properly integrated at the hardware and software levels. Unmanned Aerial Vehicles (UAVs) are emerging as a mobile mapping platform that can provide additional economical and practical advantages. However, such economical and practical requirements need navigation systems that can provide an uninterrupted navigation solution. Hence, testing the performance characteristics of Micro-Electro-Mechanical Systems (MEMS) or other low-cost navigation sensors for various UAV applications is an important research topic. This work focuses on studying the performance characteristics under different manoeuvres using inertial measurements integrated with single point positioning, Real-Time-Kinematic (RTK), and additional navigational aiding sensors. Furthermore, the performance of the inertial sensors is tested during Global Positioning System (GPS) signal outage.
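
    The effect characterized during a GPS outage is easy to picture with a bare inertial propagation step. The fragment below is illustrative only (not the authors' integration filter): position and velocity are carried forward from accelerations already rotated into the navigation frame, and the unbounded error growth of this process during an outage is exactly what such tests quantify.

        # One inertial dead-reckoning step in the navigation frame.
        import numpy as np

        def propagate(p, v, a_nav, dt):
            """Advance position p and velocity v by one step of length dt."""
            p_new = p + v * dt + 0.5 * a_nav * dt**2
            v_new = v + a_nav * dt
            return p_new, v_new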

  8. Vision-based coaching: Optimizing resources for leader development

    Directory of Open Access Journals (Sweden)

    Angela M. Passarelli

    2015-04-01

    Full Text Available Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader's development fail to leverage the developmental benefits of the individual's personal vision. Drawing on Intentional Change Theory, this article postulates that coaching interactions that emphasize a leader's personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader's identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  9. Vision-based coaching: optimizing resources for leader development

    Science.gov (United States)

    Passarelli, Angela M.

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  10. Neuroimaging of amblyopia and binocular vision: a review

    Directory of Open Access Journals (Sweden)

    Olivier eJoly

    2014-08-01

    Full Text Available Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, which has led more and more studies to examine binocular deficits in patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and, more recently, with neuroimaging techniques in humans. In this review, we summarise the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision, using functional magnetic resonance imaging (fMRI). The first studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterise the brain response changes associated with these treatments and help devise them.

  11. Utilizing Robot Operating System (ROS) in Robot Vision and Control

    Science.gov (United States)

    2015-09-01

    Utilizing Robot Operating System (ROS) in Robot Vision and Control, by Joshua S. Lum, September 2015. Thesis Advisor: Xiaoping Yun; Co-Advisor: Zac Staples.

  12. Multi-spectrum-based enhanced synthetic vision system for aircraft DVE operations

    Science.gov (United States)

    Kashyap, Sudesh K.; Naidu, V. P. S.; Shanthakumar, N.

    2016-04-01

    This paper focuses on R&D being carried out at CSIR-NAL on an Enhanced Synthetic Vision System (ESVS) for the Indian regional transport aircraft, to enhance all-weather operational capabilities with safety and pilot Situation Awareness (SA) improvements. A flight simulator has been developed to study ESVS-related technologies, to develop ESVS operational concepts for all-weather approach and landing, and to provide quantitative and qualitative information that could be used to develop criteria for all-weather approach and landing at regional airports in India. An Enhanced Vision System (EVS) hardware prototype with a long-wave infrared sensor and a low-light CMOS camera was used to carry out a few field trials on a ground vehicle at an airport runway under different visibility conditions. A data acquisition and playback system has been developed to capture EVS sensor data (images) in time synchronization with the test vehicle's inertial navigation data during EVS field experiments, and to play back the experimental data on the ESVS flight simulator for ESVS research and concept studies. Efforts are underway to conduct EVS flight experiments on CSIR-NAL's research aircraft HANSA in a Degraded Visual Environment (DVE).

  13. Screen Miniatures as Icons for Backward Navigation in Content-Based Software.

    Science.gov (United States)

    Boling, Elizabeth; Ma, Guoping; Tao, Chia-Wen; Askun, Cengiz; Green, Tim; Frick, Theodore; Schaumburg, Heike

    Users of content-based software programs, including hypertexts and instructional multimedia, rely on the navigation functions provided by the designers of those programs. Typical navigation schemes use abstract symbols (arrows) to label basic navigational functions like moving forward or backward through screen displays. In a previous study, the…

  14. Viewing geometry determines the contribution of binocular vision to the online control of grasping.

    Science.gov (United States)

    Keefe, Bruce D; Watt, Simon J

    2017-12-01

    Binocular vision is often assumed to make a specific, critical contribution to online visual control of grasping by providing precise information about the separation between digits and object. This account overlooks the 'viewing geometry' typically encountered in grasping, however. Separation of hand and object is rarely aligned precisely with the line of sight (the visual depth dimension), and analysis of the raw signals suggests that, for most other viewing angles, binocular feedback is less precise than monocular feedback. Thus, online grasp control relying selectively on binocular feedback would not be robust to natural changes in viewing geometry. Alternatively, sensory integration theory suggests that different signals contribute according to their relative precision, in which case the role of binocular feedback should depend on viewing geometry, rather than being 'hard-wired'. We manipulated viewing geometry, and assessed the role of binocular feedback by measuring the effects on grasping of occluding one eye at movement onset. Loss of binocular feedback resulted in a significantly less extended final slow-movement phase when hand and object were separated primarily in the frontoparallel plane (where binocular information is relatively imprecise), compared to when they were separated primarily along the line of sight (where binocular information is relatively precise). Consistent with sensory integration theory, this suggests the role of binocular (and monocular) vision in online grasp control is not a fixed, 'architectural' property of the visuo-motor system, but arises instead from the interaction of viewer and situation, allowing robust online control across natural variations in viewing geometry.

  15. Vision in avian emberizid foragers: maximizing both binocular vision and fronto-lateral visual acuity.

    Science.gov (United States)

    Moore, Bret A; Pita, Diana; Tyrrell, Luke P; Fernández-Juricic, Esteban

    2015-05-01

    Avian species vary in their visual system configuration, but previous studies have often compared single visual traits between two to three distantly related species. However, birds use different visual dimensions that cannot be maximized simultaneously to meet different perceptual demands, potentially leading to trade-offs between visual traits. We studied the degree of inter-specific variation in multiple visual traits related to foraging and anti-predator behaviors in nine species of closely related emberizid sparrows, controlling for phylogenetic effects. Emberizid sparrows maximize binocular vision, even seeing their bill tips in some eye positions, which may enhance the detection of prey and facilitate food handling. Sparrows have a single retinal center of acute vision (i.e. fovea) projecting fronto-laterally (but not into the binocular field). The foveal projection close to the edge of the binocular field may shorten the time to gather and process both monocular and binocular visual information from the foraging substrate. Contrary to previous work, we found that species with larger visual fields had higher visual acuity, which may compensate for larger blind spots (i.e. pectens) above the center of acute vision, enhancing predator detection. Finally, species with a steeper change in ganglion cell density across the retina had higher eye movement amplitude, probably due to a more pronounced reduction in visual resolution away from the fovea, which would need to be moved around more frequently. The visual configuration of emberizid passive prey foragers is substantially different from that of previously studied avian groups (e.g. sit-and-wait and tactile foragers). © 2015. Published by The Company of Biologists Ltd.

  16. Three dimensional monocular human motion analysis in end-effector space

    DEFF Research Database (Denmark)

    Hauberg, Søren; Lapuyade, Jerome; Engell-Nørregård, Morten Pol

    2009-01-01

    In this paper, we present a novel approach to three dimensional human motion estimation from monocular video data. We employ a particle filter to perform the motion estimation. The novelty of the method lies in the choice of state space for the particle filter. Using a non-linear inverse kinemati...

  17. Distributed Ship Navigation Control System Based on Dual Network

    Science.gov (United States)

    Yao, Ying; Lv, Wu

    2017-10-01

    The navigation system is vital to a ship's normal operation, and it contains many devices and sensors that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors have developed, the navigation system now also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network comprising a CAN bus and industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy, based on the idea of synchronizing status by multicasting UDP messages containing operation timestamps, to make the system more efficient and reliable.
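
    The status-synchronization scheme described above maps naturally onto standard UDP multicast. The sketch below is a guess at the message layout (the group address, port, and JSON payload are assumptions): each terminal multicasts its status with an operation timestamp, and receivers keep whichever status carries the newest timestamp.

        # Illustrative status broadcast; group, port, and payload are made up.
        import json
        import socket
        import time

        GROUP, PORT = "239.0.0.42", 5007

        def send_status(status: str) -> None:
            msg = json.dumps({"status": status, "ts": time.time()}).encode()
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                               socket.IPPROTO_UDP) as s:
                s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
                s.sendto(msg, (GROUP, PORT))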

  18. Value and Vision-based Methodology in Integrated Design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    The aim of this thesis is to examine how the process of value transformation occurs within a design team context, based on empirical data from workshops where the Value and Vision-based methodology has been taught. The research approach chosen for this investigation is Action Research, where the researcher plays an active role in generating the data and gains a deeper understanding of the investigated phenomena. The result of this thesis is an account of the value transformation from an explicit set of values to a product concept, using a vision-based concept development methodology built on the Pyramid Model (Lerdahl, 2001). The thesis is divided in three parts: the systemic unfolding of the Value and Vision-based methodology, the structured presentation of its practical implementation, and finally the analysis and conclusion regarding the value transformation, phenomena and learning aspects of the methodology.

  19. Synthetic vision systems: operational considerations simulation experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-04-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents / accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  20. Synthetic Vision Systems - Operational Considerations Simulation Experiment

    Science.gov (United States)

    Kramer, Lynda J.; Williams, Steven P.; Bailey, Randall E.; Glaab, Louis J.

    2007-01-01

    Synthetic vision is a computer-generated image of the external scene topography that is generated from aircraft attitude, high-precision navigation information, and data of the terrain, obstacles, cultural features, and other required flight information. A synthetic vision system (SVS) enhances this basic functionality with real-time integrity to ensure the validity of the databases, perform obstacle detection and independent navigation accuracy verification, and provide traffic surveillance. Over the last five years, NASA and its industry partners have developed and deployed SVS technologies for commercial, business, and general aviation aircraft which have been shown to provide significant improvements in terrain awareness and reductions in the potential for Controlled-Flight-Into-Terrain incidents/accidents compared to current generation cockpit technologies. It has been hypothesized that SVS displays can greatly improve the safety and operational flexibility of flight in Instrument Meteorological Conditions (IMC) to a level comparable to clear-day Visual Meteorological Conditions (VMC), regardless of actual weather conditions or time of day. An experiment was conducted to evaluate SVS and SVS-related technologies as well as the influence of where the information is provided to the pilot (e.g., on a Head-Up or Head-Down Display) for consideration in defining landing minima based upon aircraft and airport equipage. The "operational considerations" evaluated under this effort included reduced visibility, decision altitudes, and airport equipage requirements, such as approach lighting systems, for SVS-equipped aircraft. Subjective results from the present study suggest that synthetic vision imagery on both head-up and head-down displays may offer benefits in situation awareness; workload; and approach and landing performance in the visibility levels, approach lighting systems, and decision altitudes tested.

  1. Audible vision for the blind and visually impaired in indoor open spaces.

    Science.gov (United States)

    Yu, Xunyi; Ganz, Aura

    2012-01-01

    In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her position relative to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in a real-life, ever-changing environment crowded with people.

  2. Evidence-based medicine: the value of vision screening.

    Science.gov (United States)

    Beauchamp, George R; Ellepola, Chalani; Beauchamp, Cynthia L

    2010-01-01

    To review the literature for evidence-based medicine (EBM), to assess the evidence for effectiveness of vision screening, and to propose moving toward value-based medicine (VBM) as a preferred basis for comparative effectiveness research. Literature based evidence is applied to five core questions concerning vision screening: (1) Is vision valuable (an inherent good)?; (2) Is screening effective (finding amblyopia)?; (3) What are the costs of screening?; (4) Is treatment effective?; and (5) Is amblyopia detection beneficial? Based on EBM literature and clinical experience, the answers to the five questions are: (1) yes; (2) based on literature, not definitively so; (3) relatively inexpensive, although some claim benefits for more expensive options such as mandatory exams; (4) yes, for compliant care, although treatment processes may have negative aspects such as "bullying"; and (5) economic productive values are likely very high, with returns of investment on the order of 10:1, while human value returns need further elucidation. Additional evidence is required to ascertain the degree to which vision screening is effective. The processes of screening are multiple, sequential, and complicated. The disease is complex, and good visual outcomes require compliance. The value of outcomes is appropriately analyzed in clinical, human, and economic terms.

  3. Ground-Based Global Navigation Satellite System GLONASS (GLObal NAvigation Satellite System) Combined Broadcast Ephemeris Data (daily files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) GLONASS Combined Broadcast Ephemeris Data (daily files of all distinct navigation...

  4. The use of x-ray pulsar-based navigation method for interplanetary flight

    Science.gov (United States)

    Yang, Bo; Guo, Xingcan; Yang, Yong

    2009-07-01

    As interplanetary missions become increasingly complex, the only mature interplanetary navigation method, based mainly on radiometric tracking by the Deep Space Network (DSN), cannot meet the rising demand for autonomous real-time navigation. This paper studies the application to interplanetary flight of a rapidly developing navigation technology, X-ray pulsar-based navigation for spacecraft (XPNAV), and evaluates its performance with a computer simulation. XPNAV is an excellent autonomous real-time navigation method and can provide comprehensive navigation information, including position, velocity, attitude, attitude rate, and time. The paper analyzes the fundamental principles and time transformation of XPNAV, and then discusses Delta-correction XPNAV, which blends the vehicle's trajectory dynamics with the pulse time-of-arrival differences at the nominal and estimated spacecraft locations within an Unscented Kalman Filter (UKF), against the background of the Mars Pathfinder mission during its heliocentric transfer orbit. XPNAV faces an intractable problem of integer pulse-phase-cycle ambiguities, similar to GPS carrier-phase navigation. This article innovatively proposes a non-ambiguity assumption approach, based on an analysis of the search-space array method, to resolve pulse phase cycle ambiguities between the nominal and estimated positions of the spacecraft. The simulation results show that the search-space array method is computationally intensive and requires long processing time when the position errors are large, whereas the non-ambiguity assumption method can solve the ambiguity problem quickly and reliably. An autonomous real-time integrated navigation system blending XPNAV with DSN tracking, celestial navigation, inertial navigation, and other methods is seen as the future direction of interplanetary flight navigation systems.
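
    The integer-cycle ambiguity can be illustrated with a toy calculation (this is not the paper's solver): a pulse time of arrival fixes the range along the pulsar line of sight only up to a whole number of pulse wavelengths c·P, and with a good enough prior the candidate nearest the predicted range is selected.

        # Toy pulse-phase ambiguity resolution along one pulsar line of sight.
        C = 299_792_458.0  # speed of light, m/s

        def resolve_ambiguity(frac_range, period_s, predicted_range):
            """Pick the integer cycle count that best matches the prediction."""
            wavelength = C * period_s
            n = round((predicted_range - frac_range) / wavelength)
            return frac_range + n * wavelength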

  5. Interaction Between Hippocampus and Cerebellum Crus I in Sequence-Based but not Place-Based Navigation

    Science.gov (United States)

    Iglói, Kinga; Doeller, Christian F.; Paradis, Anne-Lise; Benchenane, Karim; Berthoz, Alain; Burgess, Neil; Rondi-Reig, Laure

    2015-01-01

    To examine the cerebellar contribution to human spatial navigation we used functional magnetic resonance imaging and virtual reality. Our findings show that the sensory-motor requirements of navigation induce activity in cerebellar lobules and cortical areas known to be involved in the motor loop and vestibular processing. By contrast, cognitive aspects of navigation mainly induce activity in a different cerebellar lobule (VIIA Crus I). Our results demonstrate a functional link between cerebellum and hippocampus in humans and identify specific functional circuits linking lobule VIIA Crus I of the cerebellum to medial parietal, medial prefrontal, and hippocampal cortices in nonmotor aspects of navigation. They further suggest that Crus I belongs to 2 nonmotor loops, involved in different strategies: place-based navigation is supported by coherent activity between left cerebellar lobule VIIA Crus I and medial parietal cortex along with right hippocampus activity, while sequence-based navigation is supported by coherent activity between right lobule VIIA Crus I, medial prefrontal cortex, and left hippocampus. These results highlight the prominent role of the human cerebellum in both motor and cognitive aspects of navigation, and specify the cortico-cerebellar circuits by which it acts depending on the requirements of the task. PMID:24947462

  6. Functional vision is improved in the majority of patients treated with external beam radiotherapy for choroid metastases: a multivariate analysis of 188 patients

    International Nuclear Information System (INIS)

    Rudoler, Shari B.; Shields, Carol L.; Corn, Benjamin W.; De Potter, Patrick; Hyslop, Terry; Curran, Walter J.; Shields, Jerry A.

    1996-01-01

    Purpose: Metastatic deposits are the most common intraocular malignancies, and their incidence may be as high as 4-12% in patients with solid tumors. We evaluated the efficacy of external beam radiotherapy (EBRT) in the palliation of posterior uveal metastases in terms of functional vision, tumor response, and globe preservation. Pre-radiotherapy tumor and patient characteristics which correlate best with vision restoration and preservation were identified. Patients and Methods: 483 consecutive patients (pts) (578 eyes) were diagnosed with intraocular metastatic disease between 1972-1995. Of these, 233 eyes (188 pts) had lesions of the posterior uveal tract and received EBRT. Pts with metastatic deposits from solid tumors were selected for analysis. Best corrected visual acuity (VA) was documented pre- and post-EBRT in 155 eyes. Visual function was 'excellent' if VA ≤ 20/50; 'navigational' if 20/60 to 20/200; and 'legally blind' if ≥ 20/400. Most patients received 30.0-40.0 Gy in 2.0-3.0 Gy fractions to the posterior or entire globe. Median follow-up time was 5.8 mo (0.7-170.0) from the start of EBRT. Results: 57% (89/155) of all evaluable eyes had improved visual function or maintained at least navigational vision following EBRT. Specifically, 43% maintained (46/69) or achieved (21/86) excellent vision, and 26% maintained (15/39) or achieved (7/47) navigational vision. 36% of blind eyes regained useful vision. 93% (217/233) experienced no clinical evidence of tumor progression, and the globe preservation rate was 98%. The following characteristics were predictive of improvement to or maintenance of excellent vision on univariate analysis: excellent vision (vs navigational or legally blind) prior to EBRT (p = 0.001), age < 55 yrs (p = 0.004), Caucasian race (vs African-American/Hispanic) (p = 0.003), duration of symptoms < 3.25 mo. (p = 0.03), bilateral metastases (vs unilateral) (p = 0.02), tumor base diameter < 15 mm (p < 0.001), and tumor

  7. ARK-2: a mobile robot that navigates autonomously in an industrial environment

    International Nuclear Information System (INIS)

    Bains, N.; Nickerson, S.; Wilkes, D.

    1995-01-01

    ARK-2 is a robot that uses a vision system based on a camera and spot laser rangefinder mounted on a pan and tilt unit for navigation. This vision system recognizes known landmarks and computes its position relative to them, thus bounding the error in its position. The vision system is also used to find known gauges, given their approximate locations, and takes readings from them. 'Approximate' in this context means the same sort of accuracy that a human would need: 'down aisle 3 on the right' suffices. ARK-2 is also equipped with the FAD (Floor Anomaly Detector) which is based on the NRC (National Research Council of Canada) BIRIS (Bi-IRIS) sensor, and keeps ARK-2 from falling into open drains or trying to negotiate large cables or pipes on the floor. ARK-2 has also been equipped with a variety of application sensors for security and safety patrol applications. Radiation sensors are used to produce contour maps of radiation levels. In order to detect fires, environmental changes and intruders, ARK-2 is equipped with smoke, temperature, humidity and gas sensors, scanning ultraviolet and infrared detectors and a microwave motion detector. In order to support autonomous, untethered operation for hours at a time, ARK-2 also has onboard systems for power, sonar-based obstacle detection, computation and communications. The project uses a UNIX environment for software development, with the onboard SPARC processor appearing as just another workstation on the LAN. Software modules include the hardware drivers, path planning, navigation, emergency stop, obstacle mapping and status monitoring. ARK-2 may also be controlled from a ROBCAD simulation. (author)

  8. Vision-based autonomous grasping of unknown piled objects

    International Nuclear Information System (INIS)

    Johnson, R.K.

    1994-01-01

    Computer vision techniques have been used to develop a vision-based grasping capability for autonomously picking and placing unknown piled objects. This work is currently being applied to the problem of hazardous waste sorting in support of the Department of Energy's Mixed Waste Operations Program

  9. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  10. Artificial vision support system (AVS²) for improved prosthetic vision.

    Science.gov (United States)

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only for a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, are especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS²) is devised that displays the captured video stream in a pixelation conforming to the dimension of the epi-retinal implant electrode array. AVS², using efficient image processing modules, modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may now be able to discern such objects in their 'field-of-view', thus enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS² is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e. epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.
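
    The pixelation stage is simple to sketch. The fragment below (the array dimensions are placeholders, not the device's real geometry) reduces each grayscale camera frame to one intensity value per electrode, which is the representation the enhancement modules then operate on.

        # Downsample a camera frame to a hypothetical electrode-array grid.
        import cv2
        import numpy as np

        def pixelate(frame: np.ndarray, rows: int = 10, cols: int = 6) -> np.ndarray:
            """One intensity per electrode; cv2.resize takes (width, height)."""
            return cv2.resize(frame, (cols, rows), interpolation=cv2.INTER_AREA)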

  11. Gamifying Navigation in Location-Based Applications

    DEFF Research Database (Denmark)

    Nadarajah, Stephanie Githa; Overgaard, Benjamin Nicholas; Pedersen, Peder Walz

    2017-01-01

    Location-based games usually entertain players through interactions at points of interest (POIs). Navigation between POIs often involves the use of either a physical or digital map, not taking advantage of the opportunity available to engage users in activities between POIs. The paper presents riddle s...

  12. Navigation with a passive brain based interface

    NARCIS (Netherlands)

    Erp, J.B.F. van; Werkhoven, P.J.; Thurlings, M.E.; Brouwer, A.-M.

    2009-01-01

    In this paper, we describe a Brain Computer Interface (BCI) for navigation. The system is based on detecting brain signals that are elicited by tactile stimulation on the torso indicating the desired direction.

  13. Vision-Based Fall Detection with Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Adrián Núñez-Marcos

    2017-01-01

    Full Text Available One of the biggest challenges in modern societies is the improvement of healthy aging and the support to older persons in their daily activities. In particular, given its social and economic impact, the automatic detection of falls has attracted considerable attention in the computer vision and pattern recognition communities. Although the approaches based on wearable sensors have provided high detection rates, some of the potential users are reluctant to wear them and thus their use is not yet normalized. As a consequence, alternative approaches such as vision-based methods have emerged. We firmly believe that the irruption of the Smart Environments and the Internet of Things paradigms, together with the increasing number of cameras in our daily environment, forms an optimal context for vision-based systems. Consequently, here we propose a vision-based solution using Convolutional Neural Networks to decide if a sequence of frames contains a person falling. To model the video motion and make the system scenario independent, we use optical flow images as input to the networks followed by a novel three-step training phase. Furthermore, our method is evaluated in three public datasets achieving the state-of-the-art results in all three of them.
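
    The motion-only input representation can be sketched as follows (the window length and flow parameters are assumptions, not the paper's settings): dense optical flow between consecutive frames is computed and its horizontal and vertical components are stacked, so the network sees motion rather than appearance.

        # Build a stacked optical-flow input from a short grayscale frame window.
        import cv2
        import numpy as np

        def flow_stack(frames):
            channels = []
            for prev, nxt in zip(frames, frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                channels.extend([flow[..., 0], flow[..., 1]])
            return np.stack(channels, axis=-1)  # H x W x 2*(len(frames)-1)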

  14. Internet-Based Indoor Navigation Services

    OpenAIRE

    Zeinalipour-Yazti, Demetrios; Laoudias, Christos; Georgiou, Kyriakos

    2017-01-01

    Smartphone advances are leading to a class of Internet-based Indoor Navigation services. IIN services rely on geolocation databases that store indoor models, comprising floor maps and points of interest, along with wireless, light, and magnetic signals for localizing users. Developing IIN services creates new information management challenges - such as crowdsourcing indoor models, acquiring and fusing big data velocity signals, localization algorithms, and custodians' location privacy. Here, ...

  15. Real Time Vision System for Obstacle Detection and Localization on FPGA

    OpenAIRE

    Alhamwi, Ali; Vandeportaele, Bertrand; Piat, Jonathan

    2015-01-01

    Obstacle detection is a mandatory function for a robot navigating in an indoor environment, especially when interaction with humans is done in a cluttered environment. Commonly used vision-based solutions like SLAM (Simultaneous Localization and Mapping) or optical flow tend to be computation intensive and require powerful computation resources to meet low speed real-time constraints. Solutions using LIDAR (Light Detection And Ranging) sensors are more robust but not co...

  16. Autonomous navigation and control of a Mars rover

    Science.gov (United States)

    Miller, D. P.; Atkinson, D. J.; Wilcox, B. H.; Mishkin, A. H.

    1990-01-01

    A Mars rover will need to be able to navigate autonomously kilometers at a time. This paper outlines the sensing, perception, planning, and execution monitoring systems that are currently being designed for the rover. The sensing is based around stereo vision. The interpretation of the images uses a registration of the depth map with a global height map provided by an orbiting spacecraft. Safe, low energy paths are then planned through the map, and expectations of what the rover's articulation sensors should sense are generated. These expectations are then used to ensure that the planned path is being executed correctly.

  17. Blind's Eye: Employing Google Directions API for Outdoor Navigation of Visually Impaired Pedestrians

    Directory of Open Access Journals (Sweden)

    SABA FEROZMEMON

    2017-07-01

    Full Text Available Vision plays a paramount role in our everyday life and assists humans in almost every walk of life. People lacking the sense of vision require assistance to move freely. The inability to navigate and orient unassisted in outdoor environments is one of the most important constraints for people with visual impairment. Motivated by this problem, we developed a simplified and user-friendly navigation system that allows visually impaired pedestrians to reach their desired outdoor location. We designed a Braille keyboard to allow blind users to input their destination. The proposed system makes use of the Google Directions API (Application Program Interface) to generate the right path to a destination. The visually impaired pedestrian wears a vibration belt that keeps them on track. The evaluation exposes shortcomings of the Google Directions API when used to navigate visually impaired pedestrians in an outdoor environment.
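
    A walking-route request of the kind the system issues can be written in a few lines. The sketch below uses the public Google Directions API endpoint; error handling is minimal and the key is a placeholder.

        # Minimal walking-directions request (API key is a placeholder).
        import requests

        def walking_route(origin: str, destination: str, api_key: str):
            resp = requests.get(
                "https://maps.googleapis.com/maps/api/directions/json",
                params={"origin": origin, "destination": destination,
                        "mode": "walking", "key": api_key},
                timeout=10,
            )
            resp.raise_for_status()
            steps = resp.json()["routes"][0]["legs"][0]["steps"]
            return [s["html_instructions"] for s in steps]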

  18. Visual impairment secondary to congenital glaucoma in children: visual responses, optical correction and use of low vision aids

    Directory of Open Access Journals (Sweden)

    Maria Aparecida Onuki Haddad

    2009-01-01

    Full Text Available INTRODUCTION: Congenital glaucoma is frequently associated with visual impairment due to optic nerve damage, corneal opacities, cataracts and amblyopia. Poor vision in childhood is related to global developmental problems, and referral to vision habilitation/rehabilitation services should be without delay to promote efficient management of the impaired vision. OBJECTIVE: To analyze data concerning visual response, the use of optical correction and prescribed low vision aids in a population of children with congenital glaucoma. METHOD: The authors analyzed data from 100 children with congenital glaucoma to assess best corrected visual acuity, prescribed optical correction and low vision aids. RESULTS: Fifty-five percent of the sample were male, 43% female. The mean age was 6.3 years. Two percent presented normal visual acuity levels, 29% mild visual impairment, 28% moderate visual impairment, 15% severe visual impairment, 11% profound visual impairment, and 15% near blindness. Sixty-eight percent received optical correction for refractive errors. Optical low vision aids were adopted for distance vision in 34% of the patients and for near vision in 6%. A manual monocular telescopic system with 2.8 × magnification was the most frequently prescribed low vision aid for distance, and for near vision a +38 diopter illuminated stand magnifier was most frequently prescribed. DISCUSSION AND CONCLUSION: Careful low vision assessment and the appropriate prescription of optical corrections and low vision aids are mandatory in children with congenital glaucoma, since this will assist their global development, improving efficiency in daily life activities and promoting social and educational inclusion.

  19. Gait disorder rehabilitation using vision and non-vision based sensors: A systematic review

    Directory of Open Access Journals (Sweden)

    Asraf Ali

    2012-08-01

    Full Text Available Even though the number of rehabilitation guidelines has never been greater, uncertainty remains regarding the efficiency and effectiveness of the rehabilitation of gait disorders. Progress has been hindered by the lack of accurate measurements of gait disorders. Thus, this article reviews rehabilitation systems for gait disorders using vision-based and non-vision-based sensor technologies, as well as combinations of the two. All papers published in the English language between 1990 and June 2012 that had the phrases “gait disorder”, “rehabilitation”, “vision sensor”, or “non vision sensor” in the title, abstract, or keywords were identified from the SpringerLink, ELSEVIER, PubMed, and IEEE databases. Synonyms of these phrases and the logical operators “and”, “or”, and “not” were also used in the article searching procedure. Out of the 91 published articles found, this review identified 84 that described the rehabilitation of gait disorders using different types of sensor technologies. This literature set presented strong evidence for the development of rehabilitation systems using markerless vision-based sensor technology. We therefore believe that the information contained in this review will assist the progress of the development of rehabilitation systems for human gait disorders.

  20. Adaptive Human aware Navigation based on Motion Pattern Analysis

    DEFF Research Database (Denmark)

    Tranberg, Søren; Svenstrup, Mikael; Andersen, Hans Jørgen

    2009-01-01

    Respecting people’s social spaces is an important prerequisite for acceptable and natural robot navigation in human environments. In this paper, we describe an adaptive system for mobile robot navigation based on estimates of whether a person seeks to interact with the robot or not. The estimates are based on run-time motion pattern analysis compared to stored experience in a database. Using a potential field centered around the person, the robot positions itself at the most appropriate place relative to the person and the interaction status. The system is validated through qualitative tests...

  1. Indoor navigation by people with visual impairment using a digital sign system.

    Directory of Open Access Journals (Sweden)

    Gordon E Legge

    Full Text Available There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects-blind, low vision, blindfolded sighted, and normally sighted controls-were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.

  2. Indoor navigation by people with visual impairment using a digital sign system.

    Science.gov (United States)

    Legge, Gordon E; Beckmann, Paul J; Tjan, Bosco S; Havey, Gary; Kramer, Kevin; Rolkosky, David; Gage, Rachel; Chen, Muzi; Puchakayala, Sravan; Rangarajan, Aravindhan

    2013-01-01

    There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects (blind, low vision, blindfolded sighted, and normally sighted controls) were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment.

  3. Image processing and applications based on visualizing navigation service

    Science.gov (United States)

    Hwang, Chyi-Wen

    2015-07-01

    When facing the overabundance of semantic web information, the researcher proposes in this paper a hierarchical classification and visualizing RIA (Rich Internet Application) navigation system: Concept Map (CM) + Semantic Structure (SS) + Knowledge on Demand (KOD) service. The aim of the multimedia processing and empirical application testing was to investigate the utility and usability of this visualizing navigation strategy in web communication design, and whether it enables users to retrieve and construct their personal knowledge. Furthermore, based on market segmentation theory from the marketing model, a User Interface (UI) classification strategy is proposed and a set of hypermedia design principles is formulated for further UI strategy and e-learning resources in semantic web communication. The research findings are: (1) Irrespective of whether the simple declarative knowledge model or the complex declarative knowledge model is used, the "CM + SS + KOD navigation system" has a better cognition effect than the "Non CM + SS + KOD navigation system". However, for users with no web design experience, the navigation system does not have an obvious cognition effect. (2) Classification is essential in semantic web communication design: different groups of users have a diversity of preference needs and different cognitive styles in the CM + SS + KOD navigation system.

  4. [Spectral navigation technology and its application in positioning the fruits of fruit trees].

    Science.gov (United States)

    Yu, Xiao-Lei; Zhao, Zhi-Min

    2010-03-01

    An innovative technology of spectral navigation is presented in this paper. The new method adopts the reflectance spectra of fruits, leaves, and branches as key navigation parameters and positions the fruits of fruit trees by relying on the differences in their spectral characteristics. The results show that the spectrum of fruit-tree leaves is distinctly smooth, the spectrum of branches shows a gradually increasing trend, and the spectrum of fruit fluctuates. In addition, the difference in reflectance between fruits and leaves peaks at a wavelength of 850 nm, so a threshold can be set at this wavelength to distinguish fruits from leaves. The method introduced here can not only quickly distinguish fruits, leaves, and branches, but also avoids the effects of the surroundings. Compared with traditional navigation systems based on machine vision, positioning the fruits of fruit trees using spectral navigation technology offers some special and unique features.
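
    The 850 nm decision rule reduces to a per-pixel threshold. The sketch below is illustrative only (the limit value, and which class lies above it, depend on sensor calibration that the abstract does not give): reflectance at 850 nm is compared against a fixed limit to separate fruit pixels from leaf pixels.

        # Hypothetical threshold test on an 850 nm reflectance image.
        import numpy as np

        def split_at_850nm(reflectance_850: np.ndarray, limit: float = 0.55):
            """Boolean mask of pixels above the limit; calibration decides
            whether that side corresponds to leaves or to fruit."""
            return reflectance_850 > limit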

  5. Spatial navigation by congenitally blind individuals

    OpenAIRE

    Schinazi, Victor R.; Thrash, Tyler; Chebat, Daniel-Robert

    2015-01-01

    Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have progressed our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergen...

  6. Monocular surgery for large-angle esotropias: a new paradigm

    Directory of Open Access Journals (Sweden)

    Edmilson Gigante

    2009-02-01

    Full Text Available PURPOSE: To demonstrate the feasibility of monocular surgery in the treatment of large-angle esotropias through large recessions of the medial rectus (6 to 10 mm) and large resections of the lateral rectus (8 to 10 mm). METHODS: 46 patients with relatively comitant esotropias of 50Δ or more were operated on under general anesthesia, with no per- or postoperative adjustments. The methods used for refractometry and for measuring visual acuity and the angle of deviation were those traditionally used in strabology. Postoperatively, in addition to measurements in the primary position of gaze, the motility of the operated eye was evaluated in adduction and in abduction. RESULTS: Four study groups were considered, corresponding to four time periods: one week, six months, two years, and four to seven years. The postoperative angle-of-deviation results were consistent with those reported in the general literature and remained stable over time. The operated eye showed a small limitation in adduction and none in abduction, contrary to what is found in the strabismus literature. Comparing the results of adults with those of children, and of amblyopes with non-amblyopes, no statistically significant differences were found. CONCLUSION: In view of these results, monocular recession-resection surgery can be considered a viable option for the treatment of large-angle esotropias, in adults as well as children, and in amblyopes as well as non-amblyopes.

  7. Advanced Navigation Aids System based on Augmented Reality

    Directory of Open Access Journals (Sweden)

    Jaeyong OH

    2016-12-01

    Full Text Available Many maritime accidents have been caused by human error, including inadequate watchkeeping and/or mistakes in ship handling. New navigational equipment has been developed using Information Technology (IT) to provide various kinds of information for safe navigation. Despite these efforts, the reduction in maritime accidents has not occurred to the degree expected, because navigational equipment provides too much information, and this information is not well organized, such that users find it complicated rather than helpful. From this point of view, the method of representing navigational information is more important than its quantity, and research is required on representations that make information more easily understood and allow decisions to be made correctly and promptly. In this paper, we adopt Augmented Reality (AR) technologies for the representation of information. AR is a 3D computer graphics technology that blends virtual reality and the real world. Recently, this technology has been widely applied in daily life because it can provide information more effectively to users. Therefore, we propose a new concept, a navigational system based on AR technology; we review experimental results from a ship-handling simulator and from an open-sea test to verify the efficiency of the proposed system.

  8. Autonomous Rule Based Robot Navigation In Orchards

    DEFF Research Database (Denmark)

    Andersen, Jens Christian; Ravn, Ole; Andersen, Nils Axel

    2010-01-01

    Orchard navigation using sensor-based localization and flexible mission management facilitates successful missions independent of the Global Positioning System (GPS). This is especially important while driving between tight tree rows where GPS coverage is poor. This paper suggests localization ...

  9. Implementation of a mobile base with evasion of obstacles using ROS navigation

    International Nuclear Information System (INIS)

    Arauz Villegas, Carolina

    2013-01-01

    A mobile base with obstacle avoidance is implemented using ROS Navigation. The mobile base was first simulated with the 2D Stage simulator; once the operation of Navigation was understood, the physical mobile base was built. The mobile base has two DC motors, an Asus Xtion sensor, a netbook and an stm32f4-discovery microcontroller. ROS was installed on the netbook, and serial communication over USB was established between the netbook and the microcontroller. Through that link, the microcontroller receives the velocity commands sent by move_base and sends the odometry data back. A PWM control is implemented for driving the motors, with a PI controller regulating the duty cycle, and the microcontroller's encoder interface is used to acquire the odometry data. The communication between ROS and the mobile base was integrated into Navigation, which allowed mapping, localization, and safe motion from a starting point to a goal by sending velocity messages. (author) [es
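
    A minimal sketch of the described bridge between move_base and the serial-connected microcontroller; the topic names, serial device, baud rate and text framing are assumptions, since the record does not specify them:

```python
#!/usr/bin/env python
# Sketch: relay cmd_vel from move_base to a microcontroller over serial
# and republish the odometry it reports. Device path and protocol are
# hypothetical; orientation is omitted from the Odometry message for brevity.
import rospy
import serial
from geometry_msgs.msg import Twist
from nav_msgs.msg import Odometry

port = serial.Serial('/dev/ttyACM0', 115200, timeout=0.1)  # assumed device

def on_cmd_vel(msg):
    # Forward linear/angular velocity as a simple text frame, e.g. "V 0.2 0.1".
    port.write(('V %.3f %.3f\n' % (msg.linear.x, msg.angular.z)).encode())

rospy.init_node('base_bridge')
rospy.Subscriber('cmd_vel', Twist, on_cmd_vel)
odom_pub = rospy.Publisher('odom', Odometry, queue_size=10)

rate = rospy.Rate(20)
while not rospy.is_shutdown():
    line = port.readline().decode(errors='ignore').strip()  # e.g. "O x y th vx wz"
    if line.startswith('O'):
        _, x, y, th, vx, wz = line.split()   # th would feed a quaternion
        odom = Odometry()
        odom.header.stamp = rospy.Time.now()
        odom.header.frame_id = 'odom'
        odom.pose.pose.position.x = float(x)
        odom.pose.pose.position.y = float(y)
        odom.twist.twist.linear.x = float(vx)
        odom.twist.twist.angular.z = float(wz)
        odom_pub.publish(odom)
    rate.sleep()
```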

  10. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    Science.gov (United States)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum, because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to a user survey, the 3D model is the preferred navigation environment. Therefore, a 3D model based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service and navigation, built to support the localization, navigation and visualization functions of the system. The system has three main strengths: it stores all the data needed in one database and processes most calculations on the webserver, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and is renewable; and the graphical user interface (GUI), which is based on a game engine, has high performance in visualizing the 3D model on a mobile display.

  11. Image-based path planning for automated virtual colonoscopy navigation

    Science.gov (United States)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, several time-consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
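
    A toy sketch of the depth-image idea under stated assumptions: given a rendered (fisheye) depth image from the current camera pose, treat the center of the deepest smoothed region as the next target direction. The smoothing kernel size and the random stand-in depth image are illustrative only, not the paper's landmark extraction:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def next_view_direction(depth_image, kernel=15):
    """Pick the pixel at the center of the deepest smoothed region.

    depth_image: 2D array of distances from a rendered depth image.
    Returns (row, col) of the candidate target region.
    """
    smoothed = uniform_filter(depth_image.astype(float), size=kernel)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)

depth = np.random.rand(240, 320) * 50.0  # stand-in for a rendered depth image
print(next_view_direction(depth))
```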

  12. Towards Safe Navigation by Formalizing Navigation Rules

    Directory of Open Access Journals (Sweden)

    Arne Kreutzmann

    2013-06-01

    Full Text Available One crucial aspect of safe navigation is to obey all applicable navigation regulations, in particular the collision regulations issued by the International Maritime Organization (IMO), the Colregs. Therefore, decision support systems for navigation need to respect the Colregs, and this feature should be verifiably correct. We tackle compliance with navigation regulations from the perspective of software verification. One common approach is to use formal logic, but it requires bridging a wide gap between navigation concepts and simple logic. We introduce a novel domain specification language based on a spatio-temporal logic that allows us to overcome this gap. We are able to capture complex navigation concepts in an easily comprehensible representation that can directly be utilized by various bridge systems and that allows for software verification.

  13. Autonomous Robot Navigation based on Visual Landmarks

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

    The use of landmarks for robot navigation is a popular alternative to having a geometrical model of the environment through which to navigate and monitor self-localization. If the landmarks are defined as special visual structures already in the environment then we have the possibility of fully autonomous navigation and self-localization using automatically selected landmarks: the system can automatically learn and store visual landmarks, and later recognize these landmarks from arbitrary positions and thus estimate robot position and heading. The thesis investigates autonomous robot navigation and proposes a new method which benefits from the potential of the visual sensor to provide accuracy and reliability to the navigation process while relying on naturally occurring landmarks.

  14. Mobile Robot Navigation Based on Q-Learning Technique

    Directory of Open Access Journals (Sweden)

    Lazhar Khriji

    2011-03-01

    Full Text Available This paper shows how the Q-learning approach can be used successfully to deal with the problem of mobile robot navigation. In real situations where a large number of obstacles are involved, the normal Q-learning approach encounters two major problems due to the excessively large state space. First, learning the Q-values in tabular form may be infeasible because of the excessive amount of memory needed to store the table. Second, rewards in the state space may be so sparse that with random exploration they will only be discovered extremely slowly. In this paper, we propose a navigation approach for mobile robots in which prior knowledge is used within Q-learning. We address the issue of individual behavior design using fuzzy logic. The strategy of behavior-based navigation reduces the complexity of the navigation problem by dividing it into small actions that are easier to design and implement. The Q-learning algorithm is applied to coordinate between these behaviors, which greatly reduces learning convergence times. Simulation and experimental results confirm convergence to the desired results in terms of saved time and computational resources.
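
    For reference, the tabular Q-learning update that such an approach builds on, in minimal form; the state/action discretization, placeholder environment and reward are assumptions, and the fuzzy behavior coordination described in the abstract is not reproduced:

```python
import numpy as np

n_states, n_actions = 64, 4          # assumed discretization of the world
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: random next state, -1 reward per move."""
    return int(rng.integers(n_states)), -1.0

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning temporal-difference update.
    Q[state, action] += alpha * (
        reward + gamma * np.max(Q[next_state]) - Q[state, action])
    state = next_state
```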

  15. A visual test based on a freeware software for quantifying and displaying night-vision disturbances: study in subjects after alcohol consumption.

    Science.gov (United States)

    Castro, José J; Ortiz, Carolina; Pozo, Antonio M; Anera, Rosario G; Soler, Margarita

    2014-05-07

    In this work, we propose the Halo test, a simple visual test based on freeware software for quantifying and displaying night-vision disturbances perceived by subjects under different experimental conditions; here we study the influence of alcohol consumption on visual function. In the Halo test, viewed on a monitor, the subject's task consists of detecting luminous peripheral stimuli around a central high-luminance stimulus over a dark background. The test, performed by subjects before and after consuming alcoholic drinks, which deteriorate visual performance, evaluates the influence that alcohol consumption exerts on visual-discrimination capacity under low-illumination conditions. Measurements were made monocularly and binocularly. Pupil size was also measured in both conditions (pre/post). Additionally, we used a double-pass device to measure the optical quality of the eye objectively and corroborate the results from the Halo test. We found a significant deterioration of the discrimination capacity after alcohol consumption, indicating that the higher the breath-alcohol content, the greater the deterioration of the visual-discrimination capacity. After alcohol intake, the graphical results showed a greater area of undetected peripheral stimuli around the central high-luminance stimulus. An enlargement of the pupil was also observed, and the optical quality of the eye deteriorated after alcohol consumption. A greater influence of halos and other night-vision disturbances was reported with the Halo test after alcohol consumption. The Halo freeware software constitutes a positive contribution for evaluating nighttime visual performance in clinical applications, such as the one reported here, but also in patients after refractive surgery (where halos are present) or for monitoring (over time) some ocular pathologies under pharmacological treatment.

  16. Parametric Covariance Model for Horizon-Based Optical Navigation

    Science.gov (United States)

    Hikes, Jacob; Liounis, Andrew J.; Christian, John A.

    2016-01-01

    This Note presents an entirely parametric version of the covariance for horizon-based optical navigation measurements. The covariance can be written as a function of only the spacecraft position, two sensor design parameters, the illumination direction, the size of the observed planet, the size of the lit arc to be used, and the total number of observed horizon points. As a result, one may now more clearly understand the sensitivity of horizon-based optical navigation performance as a function of these key design parameters, which is insight that was obscured in previous (and nonparametric) versions of the covariance. Finally, the new parametric covariance is shown to agree with both the nonparametric analytic covariance and results from a Monte Carlo analysis.

  17. Visual Peoplemeter: A Vision-based Television Audience Measurement System

    Directory of Open Access Journals (Sweden)

    SKELIN, A. K.

    2014-11-01

    Full Text Available The Visual peoplemeter is a vision-based measurement system that objectively evaluates attentive behavior for TV audience rating, thus offering a solution to some of the drawbacks of current manual-logging peoplemeters. In this paper, some limitations of current audience measurement systems are reviewed and a novel vision-based system aiming at passive metering of viewers is prototyped. The system uses a camera mounted on a television as a sensing modality and applies advanced computer vision algorithms to detect and track a person and to recognize attentional states. Feasibility of the system is evaluated on a secondary dataset. The results show that the proposed system can analyze a viewer's attentive behavior, therefore enabling passive estimates of relevant audience measurement categories.

  18. Visions and visioning in foresight activities

    DEFF Research Database (Denmark)

    Jørgensen, Michael Søgaard; Grosu, Dan

    2007-01-01

    The paper discusses the roles of visioning processes and visions in foresight activities and in societal discourses and changes parallel to or following foresight activities. The overall topic can be characterised as the dynamics and mechanisms that make visions and visioning processes work or not work. The theoretical part of the paper presents an actor-network theory approach to the analyses of visions and visioning processes, where the shaping of the visions and the visioning, and what has made them work or not work, is analysed. The empirical part is based on analyses of the roles of visions and visioning processes in a number of foresight processes from different societal contexts. The analyses have been carried out as part of the work in the COST A22 network on foresight. A vision is here understood as a description of a desirable or preferable future, compared to a scenario which is understood...

  19. Comparison of the Lea Symbols and HOTV charts for preschool vision screening from age 3 to 4.5 years old

    Directory of Open Access Journals (Sweden)

    Ya-Hui Zhang

    2014-12-01

    Full Text Available AIM: To evaluate the applicability of the Lea Symbols and HOTV charts and the development of normal visual acuity from age 3 to 4.5 years. METHODS: This was a survey research study. In total, 133 preschoolers (266 eyes) between 3 and 4.5 years old, recruited from two kindergartens in Guangzhou, were tested with both the Lea Symbols chart and the HOTV chart. Outcome measures were monocular logarithm of the minimum angle of resolution (logMAR) visual acuity and inter-eye acuity difference in logMAR units for each test. RESULTS: The testability rates of the two charts were high (96.24% vs 92.48%, respectively), but the difference was not statistically significant (P>0.05). The difference between the two kinds of monocular vision was not statistically significant (right eye: t=0.517, P=0.606; left eye: t=-0.618, P=0.538). There was no significant difference between eyes (Lea Symbols chart: t=0.638, P=0.525; HOTV chart: t=-0.897, P=0.372). The visual acuities of the boys were better than those of the girls, but the difference was not statistically significant (P>0.05). The visual acuities measured with the two charts for the corresponding age groups (3-year-old, 3.5-year-old, 4-year-old, and 4.5-year-old groups) indicated that the visual acuities of the preschoolers improved with increasing age, but the difference among the visual acuities with the Lea Symbols chart was not statistically significant (right eye: F=2.662, P=0.052; left eye: F=1.850, P=0.143). However, the difference among the visual acuities with the HOTV chart was statistically significant (right eye: F=4.518, P=0.005; left eye: F=3.893, P=0.011). CONCLUSION: Both the Lea Symbols and HOTV charts are accepted and appropriate for preschool vision screening from 3 to 4.5 years old. The monocular visual acuity of preschoolers from age 3 to 4.5 years was similar when assessed with the two charts. There is no correlation between visual acuity and eye laterality.

  20. Perceiving space and optical cues via a visuo-tactile sensory substitution system: a methodological approach for training of blind subjects for navigation.

    Science.gov (United States)

    Segond, Hervé; Weiss, Déborah; Kawalec, Magdalena; Sampaio, Eliana

    2013-01-01

    A methodological approach to perceptual learning was used to allow both early blind subjects (experimental group) and blindfolded sighted subjects (control group) to experience optical information and spatial phenomena, on the basis of visuo-tactile information transmitted by a 64-taxel pneumatic sensory substitution device. The learning process allowed the subjects to develop abilities in spatial localisation, shape recognition (with generalisation to different points of view), and monocular depth cue interpretation. During the training phase, early blind people initially experienced more difficulties than blindfolded sighted subjects (who had previous perceptual experience of perspective) in interpreting and using monocular depth cues. The improvement in the performance of all blind subjects during the training sessions, and the quite similar level of performance reached by the two groups in the final navigation tasks, suggested that early blind people were able to develop and apply a cognitive understanding of depth cues. Both groups showed generalisation of the learning from the initial phases to cue identification in the maze, and subjectively experienced shapes facing them. Subjects' performance depended not only on their perceptual experience but also on their previous spatial competencies.

  1. Heading-vector navigation based on head-direction cells and path integration.

    Science.gov (United States)

    Kubie, John L; Fenton, André A

    2009-05-01

    Insect navigation is guided by heading vectors that are computed by path integration. Mammalian navigation models, on the other hand, are typically based on map-like place representations provided by hippocampal place cells. Such models compute optimal routes as a continuous series of locations that connect the current location to a goal. We propose a "heading-vector" model in which head-direction cells or their derivatives serve both as key elements in constructing the optimal route and as the straight-line guidance during route execution. The model is based on a memory structure termed the "shortcut matrix," which is constructed during the initial exploration of an environment when a set of shortcut vectors between sequential pairs of visited waypoint locations is stored. A mechanism is proposed for calculating and storing these vectors that relies on a hypothesized cell type termed an "accumulating head-direction cell." Following exploration, shortcut vectors connecting all pairs of waypoint locations are computed by vector arithmetic and stored in the shortcut matrix. On re-entry, when local view or place representations query the shortcut matrix with a current waypoint and goal, a shortcut trajectory is retrieved. Since the trajectory direction is in head-direction compass coordinates, navigation is accomplished by tracking the firing of head-direction cells that are tuned to the heading angle. Section 1 of the manuscript describes the properties of accumulating head-direction cells. It then shows how accumulating head-direction cells can store local vectors and perform vector arithmetic to perform path-integration-based homing. Section 2 describes the construction and use of the shortcut matrix for computing direct paths between any pair of locations that have been registered in the shortcut matrix. In the discussion, we analyze the advantages of heading-based navigation over map-based navigation. Finally, we survey behavioral evidence that nonhippocampal
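
    A minimal sketch of the shortcut-matrix idea under stated assumptions: local vectors between sequentially visited waypoints are stored during exploration, and the shortcut between any pair is obtained by vector arithmetic. The waypoint coordinates are hypothetical, and the accumulating head-direction cell mechanism itself is not modeled:

```python
import numpy as np

# Hypothetical waypoints visited in sequence during exploration.
waypoints = np.array([[0.0, 0.0], [2.0, 1.0], [3.0, 4.0], [1.0, 5.0]])
local_vectors = np.diff(waypoints, axis=0)   # stored shortcut vectors

n = len(waypoints)
shortcut = np.zeros((n, n, 2))
for i in range(n):
    for j in range(n):
        # Vector arithmetic: sum the local vectors between waypoints i and j.
        lo, hi = sorted((i, j))
        v = local_vectors[lo:hi].sum(axis=0)
        shortcut[i, j] = v if i <= j else -v

# On re-entry, query the matrix with (current waypoint, goal) to retrieve a
# heading in compass coordinates, then track head-direction cells tuned to it.
heading = shortcut[0, 3]
print(np.degrees(np.arctan2(heading[1], heading[0])))
```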

  2. Fully self-contained vision-aided navigation and landing of a micro air vehicle independent from external sensor inputs

    Science.gov (United States)

    Brockers, Roland; Susca, Sara; Zhu, David; Matthies, Larry

    2012-06-01

    Direct-lift micro air vehicles have important applications in reconnaissance. In order to conduct persistent surveillance in urban environments, it is essential that these systems can perform autonomous landing maneuvers on elevated surfaces that provide high vantage points without the help of any external sensor and with a fully contained on-board software solution. In this paper, we present a micro air vehicle that uses vision feedback from a single down looking camera to navigate autonomously and detect an elevated landing platform as a surrogate for a roof top. Our method requires no special preparation (labels or markers) of the landing location. Rather, leveraging the planar character of urban structure, the landing platform detection system uses a planar homography decomposition to detect landing targets and produce approach waypoints for autonomous landing. The vehicle control algorithm uses a Kalman filter based approach for pose estimation to fuse visual SLAM (PTAM) position estimates with IMU data to correct for high latency SLAM inputs and to increase the position estimate update rate in order to improve control stability. Scale recovery is achieved using inputs from a sonar altimeter. In experimental runs, we demonstrate a real-time implementation running on-board a micro aerial vehicle that is fully self-contained and independent from any external sensor information. With this method, the vehicle is able to search autonomously for a landing location and perform precision landing maneuvers on the detected targets.
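
    The planar-homography decomposition step can be sketched with OpenCV; the intrinsics and homography below are placeholders, and pruning the candidate solutions (e.g., with IMU gravity or visibility constraints) is only noted in a comment. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np
import cv2

# Hypothetical camera intrinsics and a homography estimated between two
# views of a planar (roof-top-like) surface.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
H = np.array([[1.02, 0.01, 4.0],
              [-0.01, 1.01, 2.0],
              [1e-5, 2e-5, 1.0]])

# Decompose into candidate (R, t, plane normal) solutions; a real system
# would prune them using visibility and IMU-gravity constraints.
num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
for R, t, n in zip(rotations, translations, normals):
    print("plane normal:", n.ravel(), "translation (up to scale):", t.ravel())
```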

  3. A Bionic Polarization Navigation Sensor and Its Calibration Method.

    Science.gov (United States)

    Zhao, Huijie; Xu, Wujian

    2016-08-03

    The polarization patterns of skylight which arise due to the scattering of sunlight in the atmosphere can be used by many insects for deriving compass information. Inspired by insects' polarized light compass, scientists have developed a new kind of navigation method. One of the key techniques in this method is the polarimetric sensor which is used to acquire direction information from skylight. In this paper, a polarization navigation sensor is proposed which imitates the working principles of the polarization vision systems of insects. We introduce the optical design and mathematical model of the sensor. In addition, a calibration method based on variable substitution and non-linear curve fitting is proposed. The results obtained from the outdoor experiments provide support for the feasibility and precision of the sensor. The sensor's signal processing can be well described using our mathematical model. A relatively high degree of accuracy in polarization measurement can be obtained without any error compensation.
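
    One plausible realization of the non-linear curve-fitting step is a Malus-law style model fit; the model form, parameters and synthetic calibration sweep below are assumptions for illustration, not the authors' exact calibration equations:

```python
import numpy as np
from scipy.optimize import curve_fit

def channel_response(theta, gain, dolp, phi, offset):
    """Polarizer-channel intensity vs. analyzer angle theta (radians)."""
    return gain * (1.0 + dolp * np.cos(2.0 * (theta - phi))) + offset

# Synthetic calibration sweep: rotate a reference polarizer over the sensor.
theta = np.linspace(0.0, np.pi, 60)
true = channel_response(theta, 1.2, 0.7, 0.3, 0.05)
measured = true + 0.01 * np.random.default_rng(1).standard_normal(theta.size)

params, _ = curve_fit(channel_response, theta, measured, p0=[1.0, 0.5, 0.0, 0.0])
print("fitted gain, DoLP, phase, offset:", params)
```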

  4. LASIK monocular en pacientes adultos con ambliopía por anisometropía [Monocular LASIK in adult patients with anisometropic amblyopia]

    Directory of Open Access Journals (Sweden)

    Alejandro Tamez-Peña

    2017-09-01

    Conclusions: Monocular refractive surgery in patients with anisometropic amblyopia is a safe and effective therapeutic option that offers satisfactory visual results, preserving or even improving the preoperative best-corrected visual acuity.

  5. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  6. Quantifying the impact on navigation performance in visually impaired: Auditory information loss versus information gain enabled through electronic travel aids.

    Directory of Open Access Journals (Sweden)

    Alex Kreilinger

    Full Text Available This study's purpose was to analyze and quantify the impact of auditory information loss versus the information gain provided by electronic travel aids (ETAs) on navigation performance in people with low vision. Navigation performance of ten subjects (age: 54.9±11.2 years) with visual acuities >1.0 LogMAR was assessed via the Graz Mobility Test (GMT). Subjects passed through a maze in three different modalities: 'Normal', with visual and auditory information available; 'Auditory Information Loss', with artificially reduced hearing (leaving only visual information); and 'ETA', with a vibrating ETA based on ultrasonic waves, thereby providing visual, auditory, and tactile information. Main performance measures comprised passage time and number of contacts. Additionally, head tracking was used to relate head movements to motion direction. When comparing 'Auditory Information Loss' to 'Normal', subjects needed significantly more time (p<0.001), made more contacts (p<0.001), had higher relative viewing angles (p = 0.002), and showed a higher percentage of orientation losses (p = 0.011). The only significant difference when comparing 'ETA' to 'Normal' was a reduced number of contacts (p<0.001). Our study provides objective, quantifiable measures of the impact of reduced hearing on navigation performance in low vision subjects. Significant effects of 'Auditory Information Loss' were found for all measures; for example, passage time increased by 17.4%. These findings show that low vision subjects rely on auditory information for navigation. In contrast, the impact of the ETA was not significant, but further analysis of head movements revealed two different coping strategies: half of the subjects used the ETA to increase speed, whereas the other half aimed at avoiding contacts.

  7. Improved GPS-based Satellite Relative Navigation Using Femtosecond Laser Relative Distance Measurements

    Directory of Open Access Journals (Sweden)

    Hyungjik Oh

    2016-03-01

    Full Text Available This study developed an approach for improving Carrier-phase Differential Global Positioning System (CDGPS)-based real-time satellite relative navigation by applying laser baseline measurement data. Robustness against the space operational environment was considered, and a Synthetic Wavelength Interferometer (SWI) algorithm based on a femtosecond laser measurement model was developed. The phase differences between two laser wavelengths were combined to measure precise distance. The generated laser data were used to improve the estimation accuracy of the float ambiguity of the CDGPS data. Real-time relative navigation simulations were performed using the extended Kalman filter algorithm. The GPS and laser-combined relative navigation accuracy was compared with GPS-only relative navigation solutions to determine the impact of the laser data on relative navigation. In numerical simulations, the success rate of integer ambiguity resolution increased when laser data were added to GPS data. The relative navigation errors also improved five-fold and two-fold relative to the GPS-only error, for 250 m and 5 km initial relative distances, respectively. The methodology developed in this study is suitable for application to future satellite formation-flying missions.
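
    The synthetic-wavelength principle behind an SWI measurement can be illustrated in a few lines; the two wavelengths and the two-way (double-pass) geometry are assumptions for illustration, not the mission values:

```python
import math

# Two close optical wavelengths form a much longer beat ("synthetic")
# wavelength, extending the unambiguous range of the phase measurement.
lambda1 = 1550.00e-9   # meters, assumed
lambda2 = 1550.10e-9   # meters, assumed

synthetic = lambda1 * lambda2 / abs(lambda1 - lambda2)
print(f"synthetic wavelength: {synthetic * 1e3:.1f} mm")   # ~24 mm here

# A measured phase difference dphi (radians) between the channels maps to a
# distance increment; the factor 4*pi assumes a double-pass interferometer.
dphi = 1.0
distance = synthetic * dphi / (4.0 * math.pi)
print(f"distance increment: {distance * 1e3:.3f} mm")
```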

  8. Optical stimulator for vision-based sensors

    DEFF Research Database (Denmark)

    Rössler, Dirk; Pedersen, David Arge Klevang; Benn, Mathias

    2014-01-01

    We have developed an optical stimulator system for vision-based sensors. The stimulator is an efficient tool for stimulating a camera during on-ground testing with scenes representative of spacecraft flights. Such scenes include starry sky, planetary objects, and other spacecraft. The optical...

  9. A lightweight, inexpensive robotic system for insect vision.

    Science.gov (United States)

    Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex

    2017-09-01

    Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware available: suitable hardware is either prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and the camera and insect vision models are then evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of vision-based navigation for robotics in general. Optic flow calculated from sample camera data is compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments.

    Science.gov (United States)

    López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M; Molinos, Eduardo J; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel

    2017-04-08

    One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This improves the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can easily be incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present experimental results with two different commercial platforms, and validate the system by applying it to their position control.

  11. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1998-01-01

    .... (4) Invariants: both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  12. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1996-01-01

    .... (4) Invariants -- both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  13. Improved artificial bee colony algorithm based gravity matching navigation method.

    Science.gov (United States)

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    Gravity matching navigation is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on the improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.

  14. Binocular combination in abnormal binocular vision.

    Science.gov (United States)

    Ding, Jian; Klein, Stanley A; Levi, Dennis M

    2013-02-08

    We investigated suprathreshold binocular combination in humans with abnormal binocular visual experience early in life. In the first experiment we presented the two eyes with equal but opposite phase shifted sine waves and measured the perceived phase of the cyclopean sine wave. Normal observers have balanced vision between the two eyes when the two eyes' images have equal contrast (i.e., both eyes contribute equally to the perceived image and perceived phase = 0°). However, in observers with strabismus and/or amblyopia, balanced vision requires a higher contrast image in the nondominant eye (NDE) than the dominant eye (DE). This asymmetry between the two eyes is larger than predicted from the contrast sensitivities or monocular perceived contrast of the two eyes and is dependent on contrast and spatial frequency: more asymmetric with higher contrast and/or spatial frequency. Our results also revealed a surprising NDE-to-DE enhancement in some of our abnormal observers. This enhancement is not evident in normal vision because it is normally masked by interocular suppression. However, in these abnormal observers the NDE-to-DE suppression was weak or absent. In the second experiment, we used the identical stimuli to measure the perceived contrast of a cyclopean grating by matching the binocular combined contrast to a standard contrast presented to the DE. These measures provide strong constraints for model fitting. We found asymmetric interocular interactions in binocular contrast perception, which was dependent on both contrast and spatial frequency in the same way as in phase perception. By introducing asymmetric parameters to the modified Ding-Sperling model including interocular contrast gain enhancement, we succeeded in accounting for both binocular combined phase and contrast simultaneously. Adding binocular contrast gain control to the modified Ding-Sperling model enabled us to predict the results of dichoptic and binocular contrast discrimination experiments

  15. Dual-EKF-Based Real-Time Celestial Navigation for Lunar Rover

    Directory of Open Access Journals (Sweden)

    Li Xie

    2012-01-01

    Full Text Available A key requirement of lunar rover autonomous navigation is to acquire state information accurately in real time during motion and to set up a gradual parameter-based nonlinear kinematics model for the rover. In this paper, we propose a dual-extended-Kalman-filter (dual-EKF) based real-time celestial navigation (RCN) method. The proposed method considers the rover position and velocity on the lunar surface as the system parameters and establishes a constant velocity (CV) model. In addition, the attitude quaternion is considered as the system state, and the quaternion differential equation, which incorporates the output of the angular rate gyroscope, is established as the state equation. The measurement equation is then established with the sun direction vector from the sun sensor and the speed observation from the speedometer. The continuous gyro output ensures real-time operation of the algorithm. Finally, we use the dual-EKF method to solve the system equations. Simulation results show that the proposed method can acquire the rover position and heading information in real time and greatly improve navigation accuracy. Our method overcomes the disadvantage of cumulative error in inertial navigation.
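
    A minimal sketch of the quaternion state propagation used in such filters, integrating q_dot = 0.5 * Omega(omega) * q with the gyro output; the gyro reading and time step are placeholders, and the full dual-EKF structure (parameter filter, sun-sensor update) is omitted:

```python
import numpy as np

def omega_matrix(w):
    """4x4 quaternion-rate matrix for angular velocity w = [wx, wy, wz]."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def propagate(q, w, dt):
    """One Euler step of q_dot = 0.5 * Omega(w) * q, then renormalize."""
    q = q + 0.5 * (omega_matrix(w) @ q) * dt
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])   # scalar-first attitude quaternion
w = np.array([0.01, -0.02, 0.005])   # rad/s, stand-in gyro reading
for _ in range(100):
    q = propagate(q, w, dt=0.01)
print(q)
```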

  16. Interference and deception detection technology of satellite navigation based on deep learning

    Science.gov (United States)

    Chen, Weiyi; Deng, Pingke; Qu, Yi; Zhang, Xiaoguang; Li, Yaping

    2017-10-01

    Satellite navigation systems play an important role in daily life and in war. The strategic position of a satellite navigation system is prominent, so it is very important to ensure that it is not disturbed or destroyed. Detecting jamming signals is a critical means of avoiding accidents in a navigation system. At present, the detection technology for jamming signals in satellite navigation systems is not intelligent, relying mainly on human decisions and experience. To address this issue, the paper proposes a method based on deep learning to monitor interference sources in satellite navigation. By training on interference signal data and extracting the features of the interference signals, the detection system model is constructed. The simulation results show that the detection accuracy of our detection system can reach nearly 70%. The method in our paper provides a new idea for research on the intelligent detection of interference and deception signals in a satellite navigation system.

  17. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system designed to improve the safety and effectiveness of vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors as well as operator errors.

  18. Data Analysis Techniques for a Lunar Surface Navigation System Testbed

    Science.gov (United States)

    Chelmins, David; Sands, O. Scott; Swank, Aaron

    2011-01-01

    NASA is interested in finding new methods of surface navigation to allow astronauts to navigate on the lunar surface. In support of the Vision for Space Exploration, the NASA Glenn Research Center developed the Lunar Extra-Vehicular Activity Crewmember Location Determination System and performed testing at the Desert Research and Technology Studies event in 2009. A significant amount of sensor data was recorded during nine tests performed with six test subjects. This paper provides the procedure, formulas, and techniques for data analysis, as well as commentary on applications.

  19. Engineering satellite-based navigation and timing global navigation satellite systems, signals, and receivers

    CERN Document Server

    Betz, J

    2016-01-01

    This book describes the design and performance analysis of satnav systems, signals, and receivers. It also provides succinct descriptions and comparisons of all the world’s satnav systems. Its comprehensive and logical structure addresses all satnav signals and systems in operation and being developed. Engineering Satellite-Based Navigation and Timing: Global Navigation Satellite Systems, Signals, and Receivers provides the technical foundation for designing and analyzing satnav signals, systems, and receivers. Its contents and structure address all satnav systems and signals: legacy, modernized, and new. It combines qualitative information with detailed techniques and analyses, providing a comprehensive set of insights and engineering tools for this complex multidisciplinary field. Part I describes system and signal engineering including orbital mechanics and constellation design, signal design principles and underlying considerations, link budgets, quantifying receiver performance in interference, and e...

  20. A Novel Augmented Reality Navigation System for Endoscopic Sinus and Skull Base Surgery: A Feasibility Study

    Science.gov (United States)

    Li, Liang; Yang, Jian; Chu, Yakui; Wu, Wenbo; Xue, Jin; Liang, Ping; Chen, Lei

    2016-01-01

    Objective To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. Materials and Methods In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigational system. We compared cadaver head-based simulated operations, the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system to conventional navigation systems. Results The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. Conclusion The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon’s skills and knowledge, not as a substitute. PMID:26757365

  1. A Novel Augmented Reality Navigation System for Endoscopic Sinus and Skull Base Surgery: A Feasibility Study.

    Directory of Open Access Journals (Sweden)

    Liang Li

    Full Text Available To verify the reliability and clinical feasibility of a self-developed navigation system based on an augmented reality technique for endoscopic sinus and skull base surgery. In this study we performed a head phantom and cadaver experiment to determine the display effect and accuracy of our navigational system. We compared cadaver head-based simulated operations, the target registration error, operation time, and National Aeronautics and Space Administration Task Load Index scores of our navigation system to conventional navigation systems. The navigation system developed in this study has a novel display mode capable of fusing endoscopic images to three-dimensional (3-D) virtual images. In the cadaver head experiment, the target registration error was 1.28 ± 0.45 mm, which met the accepted standards of a navigation system used for nasal endoscopic surgery. Compared with conventional navigation systems, the new system was more effective in terms of operation time and the mental workload of surgeons, which is especially important for less experienced surgeons. The self-developed augmented reality navigation system for endoscopic sinus and skull base surgery appears to have advantages that outweigh those of conventional navigation systems. We conclude that this navigational system will provide rhinologists with more intuitive and more detailed imaging information, thus reducing the judgment time and mental workload of surgeons when performing complex sinus and skull base surgeries. Ultimately, this new navigational system has potential to increase the quality of surgeries. In addition, the augmented reality navigational system could be of interest to junior doctors being trained in endoscopic techniques because it could speed up their learning. However, it should be noted that the navigation system serves as an adjunct to a surgeon's skills and knowledge, not as a substitute.

  2. Capturing age-related changes in functional contrast sensitivity with decreasing light levels in monocular and binocular vision.

    Science.gov (United States)

    Gillespie-Gallery, Hanna; Konstantakopoulou, Evgenia; Harlow, Jonathan A; Barbur, John L

    2013-09-09

    It is challenging to separate the effects of normal aging of the retina and visual pathways independently from optical factors, decreased retinal illuminance, and early stage disease. This study determined limits to describe the effect of light level on normal, age-related changes in monocular and binocular functional contrast sensitivity. We recruited 95 participants aged 20 to 85 years. Contrast thresholds for correct orientation discrimination of the gap in a Landolt C optotype were measured using a 4-alternative, forced-choice (4AFC) procedure at screen luminances from 34 to 0.12 cd/m² at the fovea and parafovea (0° and ±4°). Pupil size was measured continuously. The Health of the Retina index (HRindex) was computed to capture the loss of contrast sensitivity with decreasing light level. Participants were excluded if they exhibited performance outside the normal limits of interocular differences or HRindex values, or signs of ocular disease. Parafoveal contrast thresholds showed a steeper decline and a higher correlation with age than foveal thresholds. Of participants with clinical signs of ocular disease, 83% had HRindex values outside the normal limits. Binocular summation of contrast signals declined with age, independent of interocular differences. The HRindex worsens more rapidly with age at the parafovea, consistent with histologic findings of rod loss and its link to age-related degenerative disease of the retina. The HRindex and interocular differences could be used to screen for and separate the earliest stages of subclinical disease from changes caused by normal aging.

  3. An FPGA-Based Omnidirectional Vision Sensor for Motion Detection on Mobile Robots

    Directory of Open Access Journals (Sweden)

    Jones Y. Mori

    2012-01-01

    Full Text Available This work presents the development of an integrated hardware/software sensor system for moving-object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment to the camera from all directions, providing a panoramic view. The sensor is used as an omnidirectional vision system, allowing for localization and navigation tasks of mobile robots. Several image processing operations such as filtering, segmentation and morphology have been included in the processing architecture. To achieve distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device, using a hardware/software co-design approach, which comprises a Nios II embedded microprocessor and specific image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system can achieve a throughput of 26.6 processed frames per second, and the performance analysis pointed out that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.
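
    The background-subtraction and center-of-mass steps can be sketched as a functional analogue in NumPy/OpenCV (the paper's implementation is FPGA hardware); the threshold and synthetic frames below are illustrative:

```python
import numpy as np
import cv2

def detect_centroid(frame_gray, background_gray, threshold=25):
    """Background subtraction followed by center of mass of the moving blob."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                     # no motion detected
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) centroid

background = np.zeros((120, 160), np.uint8)          # assumed static background
frame = background.copy()
cv2.rectangle(frame, (60, 40), (80, 60), 255, -1)    # synthetic moving object
print(detect_centroid(frame, background))            # ~ (70.0, 50.0)
```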

  4. UAV Visual Autolocalization Based on Automatic Landmark Recognition

    Science.gov (United States)

    Silva Filho, P.; Shiguemori, E. H.; Saotome, O.

    2017-08-01

    Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a widely discussed problem in the scientific community. Several approaches are being developed, but the main strategies considered so far are computer-vision-based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position at each landmark recognition was compatible with the GPS data, indicating that the developed method can be used as an alternative navigation system.

  5. Evaluation of a Hospital-Based Pneumonia Nurse Navigator Program.

    Science.gov (United States)

    Seldon, Lisa E; McDonough, Kelly; Turner, Barbara; Simmons, Leigh Ann

    2016-12-01

    The aim of this study is to evaluate the effectiveness of a hospital-based pneumonia nurse navigator program. This study used a retrospective, formative evaluation. Data were drawn from patients admitted from January 2012 through December 2014 to a large community hospital with a primary or secondary diagnosis of pneumonia, excluding aspiration pneumonia. Data collected included patient demographics, diagnoses, insurance coverage, core measures, average length of stay (ALOS), disposition, readmission rate, financial outcomes, and patient barriers to care. Descriptive statistics and parametric testing were used to analyze the data. Core measure performance was sustained at the 90th percentile two years after the implementation of the navigator program. The ALOS did not decrease to established benchmarks; however, the standard deviation of ALOS decreased by nearly half after implementation of the navigator program, suggesting the program decreased the number and length of extended stays. Charges per case decreased by 21% from 2012 to 2014. Variable costs decreased by 4% over the two-year period, which increased net profit per case by 5%. Average readmission payments increased by 8% from 2012 to 2014, and the net revenue per case increased by 8.3%. The pneumonia nurse navigator program may improve core measures, reduce ALOS, and increase net revenue. Future evaluations are necessary to substantiate these findings and optimize the cost and quality performance of navigator programs.

  6. Image-based navigation for a robotized flexible endoscope

    NARCIS (Netherlands)

    van der Stap, N.; Slump, Cornelis H.; Broeders, Ivo Adriaan Maria Johannes; van der Heijden, Ferdinand; Luo, Xiongbiao; Reichl, Tobias; Mirota, Daniel; Soper, Timothy

    2014-01-01

    Robotizing flexible endoscopy enables image-based control of endoscopes. Especially during high-throughput procedures, such as a colonoscopy, navigation support algorithms could improve procedure turnaround and ergonomics for the endoscopist. In this study, we have developed and implemented a

  7. ATON (Autonomous Terrain-based Optical Navigation) for exploration missions: recent flight test results

    Science.gov (United States)

    Theil, S.; Ammann, N.; Andert, F.; Franz, T.; Krüger, H.; Lehner, H.; Lingenauber, M.; Lüdtke, D.; Maass, B.; Paproth, C.; Wohlfeil, J.

    2018-03-01

    Since 2010 the German Aerospace Center is working on the project Autonomous Terrain-based Optical Navigation (ATON). Its objective is the development of technologies which allow autonomous navigation of spacecraft in orbit around and during landing on celestial bodies like the Moon, planets, asteroids and comets. The project developed different image processing techniques and optical navigation methods as well as sensor data fusion. The setup—which is applicable to many exploration missions—consists of an inertial measurement unit, a laser altimeter, a star tracker and one or multiple navigation cameras. In the past years, several milestones have been achieved. It started with the setup of a simulation environment including the detailed simulation of camera images. This was continued by hardware-in-the-loop tests in the Testbed for Robotic Optical Navigation (TRON) where images were generated by real cameras in a simulated downscaled lunar landing scene. Data were recorded in helicopter flight tests and post-processed in real-time to increase maturity of the algorithms and to optimize the software. Recently, two more milestones have been achieved. In late 2016, the whole navigation system setup was flying on an unmanned helicopter while processing all sensor information onboard in real time. For the latest milestone the navigation system was tested in closed-loop on the unmanned helicopter. For that purpose the ATON navigation system provided the navigation state for the guidance and control of the unmanned helicopter replacing the GPS-based standard navigation system. The paper will give an introduction to the ATON project and its concept. The methods and algorithms of ATON are briefly described. The flight test results of the latest two milestones are presented and discussed.

  8. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    Science.gov (United States)

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobotics in complex and dynamically changing environments, which is a highly demanding feature, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based close-loop control and path planning, is highly promising for their autonomous operation in complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale scenarios.

  9. Quad-Rotor Helicopter Autonomous Navigation Based on Vanishing Point Algorithm

    Directory of Open Access Journals (Sweden)

    Jialiang Wang

    2014-01-01

    Full Text Available Quad-rotor helicopters are becoming increasingly popular as they can carry out many flight missions in challenging environments, with lower risk of damage to themselves and their surroundings. They are employed in many applications, from military operations to civilian tasks. This paper implements quad-rotor helicopter autonomous navigation based on the vanishing point fast estimation (VPFE) algorithm using the clustering principle. For images collected by the camera of the quad-rotor helicopter, the system preprocesses each image, removes noise interference, extracts edges using the Canny operator, and extracts straight lines by the randomized Hough transform (RHT) method. The system then obtains the position of the vanishing point, regards it as the destination point, and finally controls the autonomous navigation of the quad-rotor helicopter by continuous correction according to the calculated navigation error. The experimental results show that the quad-rotor helicopter can perform destination navigation well in an indoor environment.
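
    The pipeline described above (Canny edges, straight-line extraction, intersection clustering) can be sketched in a few lines of OpenCV. The exact VPFE clustering rule is not given in the record, so the snippet below substitutes a simple median-based inlier average; thresholds and parameters are illustrative assumptions.

        import cv2
        import numpy as np

        def intersection(l1, l2):
            """Intersection of two segments given as (x1, y1, x2, y2), or None if parallel."""
            x1, y1, x2, y2 = l1
            x3, y3, x4, y4 = l2
            d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
            if abs(d) < 1e-9:
                return None
            a, b = x1 * y2 - y1 * x2, x3 * y4 - y3 * x4
            return ((a * (x3 - x4) - (x1 - x2) * b) / d,
                    (a * (y3 - y4) - (y1 - y2) * b) / d)

        def vanishing_point(gray):
            edges = cv2.Canny(gray, 50, 150)                    # edge extraction
            lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,  # straight lines
                                    minLineLength=40, maxLineGap=10)
            if lines is None:
                return None
            segs = [l[0] for l in lines]
            pts = [p for i in range(len(segs)) for j in range(i + 1, len(segs))
                   if (p := intersection(segs[i], segs[j])) is not None]
            pts = np.array(pts)
            # crude clustering: average the intersections closest to the median
            d = np.linalg.norm(pts - np.median(pts, axis=0), axis=1)
            return pts[d < np.percentile(d, 25)].mean(axis=0)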

  10. A wearable mobility device for the blind using retina-inspired dynamic vision sensors.

    Science.gov (United States)

    Ghaderi, Viviane S; Mulas, Marcello; Pereira, Vinicius Felisberto Santos; Everding, Lukas; Weikersdorfer, David; Conradt, Jorg

    2015-01-01

    This paper proposes a prototype of a wearable mobility device that aims to assist the blind with navigation and object avoidance via auditory-vision substitution. The described system uses two dynamic vision sensors and event-based information processing techniques to extract depth information. The 3D visual input is then processed using three different strategies and converted to a 3D output sound using an individualized head-related transfer function. The performance of the device with the different processing strategies is evaluated via initial tests with ten subjects. The outcome of these tests demonstrates promising performance of the system after only very short training times of a few minutes, owing to the minimal encoding of the vision sensor outputs, which are translated into simple sound patterns easily interpretable by the user. The envisioned system will allow for efficient real-time algorithms on a hands-free and lightweight device with exceptional battery lifetime.

  11. Trajectory Analysis and Prediction for Improved Pedestrian Safety

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Trivedi, Mohan M.; Moeslund, Thomas B.

    2015-01-01

    This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pede...

  12. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Hyungjin Kim

    2015-08-01

    Full Text Available Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and in real environments.
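
    The two-stage estimation described here can be approximated with standard tools: OpenCV's solvePnP for the initial pose, followed by nonlinear refinement of Mahalanobis-whitened reprojection residuals. The sketch below is a minimal reading of that scheme, assuming per-feature 2x2 covariances from the probabilistic map; it is not the authors' code.

        import cv2
        import numpy as np
        from scipy.optimize import least_squares

        def refine_pose(pts3d, pts2d, covs2d, K):
            """PnP initialization, then minimize Mahalanobis reprojection errors.

            pts3d: (N, 3) map points; pts2d: (N, 2) query-image features
            covs2d: (N, 2, 2) per-feature uncertainties from the probabilistic map
            K: 3x3 camera intrinsic matrix
            """
            ok, rvec, tvec = cv2.solvePnP(pts3d.astype(np.float64),
                                          pts2d.astype(np.float64), K, None)
            # whitening W with W^T W = Sigma^-1, so ||W r|| is the Mahalanobis norm
            Ws = [np.linalg.cholesky(np.linalg.inv(S)).T for S in covs2d]

            def residuals(x):
                proj, _ = cv2.projectPoints(pts3d, x[:3], x[3:], K, None)
                r = proj.reshape(-1, 2) - pts2d
                return np.concatenate([W @ ri for W, ri in zip(Ws, r)])

            sol = least_squares(residuals, np.hstack([rvec.ravel(), tvec.ravel()]))
            return sol.x[:3], sol.x[3:]  # refined rvec, tvec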

  13. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and in real environments.

  14. DRIFT-FREE INDOOR NAVIGATION USING SIMULTANEOUS LOCALIZATION AND MAPPING OF THE AMBIENT HETEROGENEOUS MAGNETIC FIELD

    Directory of Open Access Journals (Sweden)

    J. C. K. Chow

    2017-09-01

    Full Text Available In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems), Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however

  15. Drift-Free Indoor Navigation Using Simultaneous Localization and Mapping of the Ambient Heterogeneous Magnetic Field

    Science.gov (United States)

    Chow, J. C. K.

    2017-09-01

    In the absence of external reference position information (e.g. surveyed targets or Global Navigation Satellite Systems) Simultaneous Localization and Mapping (SLAM) has proven to be an effective method for indoor navigation. The positioning drift can be reduced with regular loop-closures and global relaxation as the backend, thus achieving a good balance between exploration and exploitation. Although vision-based systems like laser scanners are typically deployed for SLAM, these sensors are heavy, energy inefficient, and expensive, making them unattractive for wearables or smartphone applications. However, the concept of SLAM can be extended to non-optical systems such as magnetometers. Instead of matching features such as walls and furniture using some variation of the Iterative Closest Point algorithm, the local magnetic field can be matched to provide loop-closure and global trajectory updates in a Gaussian Process (GP) SLAM framework. With a MEMS-based inertial measurement unit providing a continuous trajectory, and the matching of locally distinct magnetic field maps, experimental results in this paper show that a drift-free navigation solution in an indoor environment with millimetre-level accuracy can be achieved. The GP-SLAM approach presented can be formulated as a maximum a posteriori estimation problem and it can naturally perform loop-detection, feature-to-feature distance minimization, global trajectory optimization, and magnetic field map estimation simultaneously. Spatially continuous features (i.e. smooth magnetic field signatures) are used instead of discrete feature correspondences (e.g. point-to-point) as in conventional vision-based SLAM. These position updates from the ambient magnetic field also provide enough information for calibrating the accelerometer bias and gyroscope bias in-use. The only restriction for this method is the need for magnetic disturbances (which is typically not an issue for indoor environments); however, no assumptions
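
    As a minimal illustration of the magnetic-fingerprinting idea, a Gaussian Process can be regressed over surveyed positions to predict the local field, and candidate positions from the dead-reckoned trajectory can then be scored by measurement likelihood. The sketch below uses scikit-learn with synthetic placeholder data; the paper's full GP-SLAM jointly estimates trajectory and map, which this toy does not attempt.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # surveyed positions (m) and magnetic field magnitudes (uT) -- placeholders
        X_train = np.random.rand(200, 2) * 10.0
        y_train = 50 + np.sin(X_train[:, 0]) + np.cos(2 * X_train[:, 1])

        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1))
        gp.fit(X_train, y_train)

        def position_update(candidates, measured_mag):
            """Pick the candidate position that best explains the measured field."""
            mu, sigma = gp.predict(candidates, return_std=True)
            loglik = -0.5 * ((measured_mag - mu) / sigma) ** 2 - np.log(sigma)
            return candidates[np.argmax(loglik)]

        # candidates drawn around the dead-reckoned INS position
        cands = np.array([[2.0, 3.0], [2.2, 3.1], [1.8, 2.9]])
        print(position_update(cands, measured_mag=50.8))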

  16. 76 FR 65540 - National Space-Based Positioning, Navigation, and Timing (PNT) Advisory Board; Meeting

    Science.gov (United States)

    2011-10-21

    .... L. 92-463, as amended), and the President's 2004 U.S. Space-Based Positioning, Navigation, and...-Based Positioning, Navigation and Timing Policy and Global Positioning System (GPS) modernization. Explore opportunities for enhancing the interoperability of GPS with other emerging international Global...

  17. Localisation accuracy of semi-dense monocular SLAM

    Science.gov (United States)

    Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias

    2017-06-01

    Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well-known semi-dense visual SLAM approach, used in monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
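
    The interplay the paper studies can be reproduced with a toy rectified-stereo simulation: depth follows Z = f·b/d, so for fixed depth the disparity grows with the baseline b, and a fixed image-point noise corrupts the estimate less. The numbers below are arbitrary assumptions chosen only to show the trend.

        import numpy as np

        rng = np.random.default_rng(0)
        f, Z, sigma = 800.0, 10.0, 0.5   # focal length (px), true depth (m), point noise (px)

        for b in (0.1, 0.5, 1.0, 2.0):                     # baseline length (m)
            d = f * b / Z                                  # true disparity (px)
            noisy_d = d + rng.normal(0, sigma, 10000) - rng.normal(0, sigma, 10000)
            Z_hat = f * b / noisy_d                        # triangulated depths
            rmse = np.sqrt(np.mean((Z_hat - Z) ** 2))
            print(f"baseline {b:4.1f} m -> depth RMSE {rmse:7.3f} m")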

  18. Computer-assisted surgery: virtual- and augmented-reality displays for navigation during urological interventions.

    Science.gov (United States)

    van Oosterom, Matthias N; van der Poel, Henk G; Navab, Nassir; van de Velde, Cornelis J H; van Leeuwen, Fijs W B

    2018-03-01

    To provide an overview of the developments made in virtual- and augmented-reality navigation procedures for urological interventions/surgery. Navigation efforts have demonstrated potential in the field of urology by supporting guidance for various disorders. The navigation approaches differ between the individual indications, but seem interchangeable to a certain extent. An increasing number of pre- and intra-operative imaging modalities has been used to create detailed surgical roadmaps, namely: (cone-beam) computed tomography, MRI, ultrasound, and single-photon emission computed tomography. Registration of these surgical roadmaps with the real-life surgical view has occurred in different forms (e.g. electromagnetic, mechanical, vision, or near-infrared optical-based), whereby a combination of approaches was suggested to provide superior outcomes. Soft-tissue deformations demand the use of confirmatory interventional (imaging) modalities. This has resulted in the introduction of new intraoperative modalities such as drop-in US, transurethral US, (drop-in) gamma probes and fluorescence cameras. These noninvasive modalities provide an alternative to invasive technologies that expose patients to X-ray doses. Whereas some reports have indicated that navigation setups provide equal or better results than conventional approaches, most trials have been performed in relatively small patient groups and clear follow-up data are missing. The reported computer-assisted surgery research concepts provide a glimpse into the future application of navigation technologies in the field of urology.

  19. Smartphone Image Acquisition During Postmortem Monocular Indirect Ophthalmoscopy.

    Science.gov (United States)

    Lantz, Patrick E; Schoppe, Candace H; Thibault, Kirk L; Porter, William T

    2016-01-01

    The medical usefulness of smartphones continues to evolve as third-party applications exploit and expand on the smartphones' interface and capabilities. This technical report describes smartphone still-image capture techniques and video-sequence recording capabilities during postmortem monocular indirect ophthalmoscopy. Using these devices and techniques, practitioners can create photographic documentation of fundal findings, clinically and at autopsy, without the expense of a retinal camera. Smartphone image acquisition of fundal abnormalities can promote ophthalmological telemedicine--especially in regions or countries with limited resources--and facilitate prompt, accurate, and unbiased documentation of retinal hemorrhages in infants and young children. © 2015 American Academy of Forensic Sciences.

  20. Enriching location-based games with navigational game activities

    DEFF Research Database (Denmark)

    Nadarajah, Stephanie Githa; Overgaard, Benjamin Nicholas; Pedersen, Peder Walz

    2016-01-01

    Mobile location-based games are experiences that entertain its players by requiring interactions mainly at points of interest (POIs). Navigation between POIs often involve the use of either a physical or digital map, not taking advantage of the opportunity available to engage users in activities...

  1. Sensor-based control with digital maps association for global navigation: a real application for autonomous vehicles

    OpenAIRE

    Alves De Lima , Danilo; Corrêa Victorino , Alessandro

    2015-01-01

    International audience; This paper presents a sensor-based control strategy applied to the global navigation of autonomous vehicles in urban environments. Typically, sensor-based control performs local navigation tasks with respect to features perceived from the environment. However, when there is more than one possible direction to take, as at a road intersection, the vehicle control fails to accomplish its global navigation. In order to solve this problem, we propose the vehicle global navigation bas...

  2. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    Science.gov (United States)

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  3. Image matching navigation based on fuzzy information

    Institute of Scientific and Technical Information of China (English)

    田玉龙; 吴伟仁; 田金文; 柳健

    2003-01-01

    In conventional image matching methods, the matching process is mostly based on image statistics. One aspect neglected by all these methods is that there is much fuzzy information contained in these images. A new fuzzy matching algorithm based on fuzzy similarity for navigation is presented in this paper. Because fuzzy theory can describe well the fuzzy information contained in images, an image matching method based on fuzzy similarity can be expected to produce good results. Experimental results using the matching algorithm based on fuzzy information also demonstrate its reliability and practicability.
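
    The record does not give the exact fuzzy similarity measure, so the sketch below substitutes one common choice: map intensities to membership grades and take the ratio of fuzzy intersection (pixel-wise minimum) to fuzzy union (pixel-wise maximum). The brute-force matcher around it is purely illustrative.

        import numpy as np

        def fuzzy_similarity(a, b):
            """Fuzzy similarity of two equally sized grayscale patches, in [0, 1]."""
            ma, mb = a / 255.0, b / 255.0          # membership grades
            return np.minimum(ma, mb).sum() / np.maximum(ma, mb).sum()

        def match(template, image):
            """Exhaustive search for the window most fuzzily similar to the template."""
            th, tw = template.shape
            best, best_xy = -1.0, (0, 0)
            for y in range(image.shape[0] - th + 1):
                for x in range(image.shape[1] - tw + 1):
                    s = fuzzy_similarity(template, image[y:y + th, x:x + tw])
                    if s > best:
                        best, best_xy = s, (y, x)
            return best_xy, best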

  4. UAV VISUAL AUTOLOCALIZATION BASED ON AUTOMATIC LANDMARK RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. Silva Filho

    2017-08-01

    Full Text Available Deploying an autonomous unmanned aerial vehicle in GPS-denied areas is a widely discussed problem in the scientific community. There are several approaches being developed, but the main strategies considered so far are computer-vision-based navigation systems. This work presents a new real-time computer-vision position estimator for UAV navigation. The estimator uses images captured during flight to recognize specific, well-known landmarks in order to estimate the latitude and longitude of the aircraft. The method was tested in a simulated environment, using a dataset of real aerial images obtained in previous flights, with synchronized images, GPS and IMU data. The estimated position at each landmark recognition was compatible with the GPS data, showing that the developed method can be used as an alternative navigation system.

  5. Monocular display unit for 3D display with correct depth perception

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-11-01

    Research on virtual-reality systems has become popular and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging displays fall into two types by presentation method: one is a 3D display system using special glasses and the other is a monitor system requiring no special glasses. The liquid crystal display (LCD) has recently come into common use, and such a display unit can provide a displaying area of the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the demerit that the size of the monitor restricts the visual field for displaying images. Thus the conventional display can show only one screen, and the screen area cannot be enlarged, for example to twice its size. To enlarge the display area, the authors have developed a method of enlarging the display area using a mirror. Our extension method enables observers to view the virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image separation technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold. Meanwhile, the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  6. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as a transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal such as visible, infrared or ultraviolet light, the data rate can be increased in a way similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot based on image processing and optical wireless communication techniques. Through experiments based on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
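
    Assuming the sync data have already been used to locate and rectify the transmitter region, decoding one snapshot reduces to sampling the LED grid cell by cell. The grid size and threshold below are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def decode_frame(gray, grid=(8, 8), threshold=128):
            """Decode one rectified grayscale snapshot of the LED array into bits."""
            h, w = gray.shape
            bits = []
            for r in range(grid[0]):
                for c in range(grid[1]):
                    cell = gray[r * h // grid[0]:(r + 1) * h // grid[0],
                                c * w // grid[1]:(c + 1) * w // grid[1]]
                    bits.append(1 if cell.mean() > threshold else 0)  # LED on/off
            return bits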

  7. Vision-based topological map building and localisation using persistent features

    CSIR Research Space (South Africa)

    Sabatta, DG

    2008-11-01

    Full Text Available The concept of topological mapping was introduced into the field of robotics following studies of human cognitive mapping undertaken by Kuipers [8]. Since then, much progress has been made in the field of vision-based topological mapping. Topological mapping lends...

  8. On a New Family of Kalman Filter Algorithms for Integrated Navigation

    Science.gov (United States)

    Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.

    2017-09-01

    Here we present a review of a new family of Kalman filter algorithms which were recently developed for integrated navigation. They are particularly useful for vision-based navigation due to the type of data involved. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for the neglected random observed quantities which may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. Both the observation equations and the system equations can be dynamic errors-in-variables (DEIV) models in the IKF algorithm. In some problems a quadratic constraint may exist; these can be solved by the CIKF algorithm. Finally, we compare the four algorithms WTKF, IKF, CIKF and EKF in numerical examples.
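
    As a reference point for what these generalized filters extend, a minimal linear Kalman filter fusing vision-based position fixes with a constant-velocity model looks as follows; the matrices are generic textbook choices, not taken from the paper.

        import numpy as np

        dt = 0.1
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1.]])   # constant-velocity model
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0.]])   # vision measures position only
        Q, R = 0.01 * np.eye(4), 0.25 * np.eye(2)     # process / measurement noise

        def kf_step(x, P, z):
            x, P = F @ x, F @ P @ F.T + Q                    # predict
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
            x = x + K @ (z - H @ x)                          # update with position fix z
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = kf_step(np.zeros(4), np.eye(4), np.array([0.1, 0.05]))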

  9. Vision-based human motion analysis: An overview

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    2007-01-01

    Markerless vision-based human motion analysis has the potential to provide an inexpensive, non-obtrusive solution for the estimation of body poses. The significant research effort in this domain has been motivated by the fact that many application areas, including surveillance, Human-Computer

  10. EAST-AIA deployment under vacuum: Calibration of laser diagnostic system using computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yang, E-mail: yangyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Song, Yuntao; Cheng, Yong; Feng, Hansheng; Wu, Zhenwei; Li, Yingying; Sun, Yongjun; Zheng, Lei [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd, Hefei, Anhui (China); Bruno, Vincent; Eric, Villedieu [CEA-IRFM, F-13108 Saint-Paul-Lez-Durance (France)

    2016-11-15

    Highlights: • The first deployment of the EAST articulated inspection arm robot under vacuum is presented. • A computer-vision-based approach to measuring the laser spot displacement is proposed. • An experiment on the real EAST tokamak is performed to validate the proposed measurement approach, and the results show that the measurement accuracy satisfies the requirement. - Abstract: For the operation of the EAST tokamak, it is crucial to ensure that all the diagnostic systems are in good condition in order to reflect the plasma status properly. However, most of the diagnostic systems are mounted inside the tokamak vacuum vessel, which makes them extremely difficult to maintain under the high vacuum conditions of tokamak operation. Thanks to a system called the EAST articulated inspection arm robot (EAST-AIA), these in-vessel diagnostic systems can be examined by an embedded camera carried by the robot. In this paper, a computer vision algorithm has been developed to calibrate a laser diagnostic system with the help of a monocular camera at the robot end. In order to estimate the displacement of the laser diagnostic system with respect to the vacuum vessel, several visual markers were attached to the inner wall. This experiment was conducted both on the EAST vacuum vessel mock-up and on the real EAST tokamak under vacuum conditions. As a result, the accuracy of the displacement measurement was within 3 mm under the current camera resolution, which satisfied the requirements of laser diagnostic system calibration.
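
    With the wall markers' metric layout known, the displacement measurement reduces to a plane-to-plane homography: detected marker pixels map the image onto the wall frame, and the laser spot's pixel coordinates can then be expressed in millimetres. The coordinates below are invented placeholders, and the marker detection step is omitted; this is a sketch of the principle, not the deployed algorithm.

        import cv2
        import numpy as np

        # known marker layout on the inner wall (mm) and their detected pixel positions
        wall_mm = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], np.float32)
        pix = np.array([[412, 310], [590, 318], [585, 495], [405, 488]], np.float32)

        H, _ = cv2.findHomography(pix, wall_mm)    # image plane -> wall plane (mm)

        def spot_position_mm(spot_px):
            """Map the detected laser spot (pixels) onto the wall frame (mm)."""
            q = H @ np.array([spot_px[0], spot_px[1], 1.0])
            return q[:2] / q[2]

        displacement = spot_position_mm((503, 401)) - spot_position_mm((500, 400))
        print("laser spot displacement (mm):", displacement)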

  11. Vision-based vehicle detection and tracking algorithm design

    Science.gov (United States)

    Hwang, Junyeon; Huh, Kunsoo; Lee, Donghwi

    2009-12-01

    The vision-based vehicle detection in front of an ego-vehicle is regarded as promising for driver assistance as well as for autonomous vehicle guidance. The feasibility of vehicle detection in a passenger car requires accurate and robust sensing performance. A multivehicle detection system based on stereo vision has been developed for better accuracy and robustness. This system utilizes morphological filter, feature detector, template matching, and epipolar constraint techniques in order to detect the corresponding pairs of vehicles. After the initial detection, the system executes the tracking algorithm for the vehicles. The proposed system can detect front vehicles such as the leading vehicle and side-lane vehicles. The position parameters of the vehicles located in front are obtained based on the detection information. The proposed vehicle detection system is implemented on a passenger car, and its performance is verified experimentally.
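
    Once a corresponding vehicle pair has been matched under the epipolar constraint, the position parameters follow from the rectified-stereo relations; a minimal helper (with invented example numbers) is shown below.

        def vehicle_position(f_px, baseline_m, u_left, u_right, cx):
            """Vehicle position from one rectified stereo match.

            f_px: focal length (px); baseline_m: camera separation (m)
            u_left, u_right: matched column coordinates in the two images
            cx: principal point column (px)
            """
            disparity = u_left - u_right
            Z = f_px * baseline_m / disparity        # distance ahead (m)
            X = Z * (u_left - cx) / f_px             # lateral offset (m)
            return X, Z

        print(vehicle_position(800.0, 0.3, 420.0, 396.0, 320.0))  # ~(1.25 m, 10.0 m)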

  12. Geomagnetic storm effects on GPS based navigation

    Directory of Open Access Journals (Sweden)

    P. V. S. Rama Rao

    2009-05-01

    Full Text Available The energetic events on the sun, the solar wind, and their subsequent effects on the Earth's geomagnetic field and upper atmosphere (ionosphere) comprise space weather. Modern navigation systems that use radio-wave signals, reflecting from or propagating through the ionosphere as a means of determining range or distance, are vulnerable to a variety of effects that can degrade their performance. In particular, the Global Positioning System (GPS), which uses a constellation of Earth-orbiting satellites, is affected by space weather phenomena.

    Studies made during two successive geomagnetic storms that occurred during the period from 8 to 12 November 2004 have clearly revealed the adverse effects on GPS range delay, as inferred from Total Electron Content (TEC) measurements made from a chain of seven dual-frequency GPS receivers installed in the Indian sector. Significant increases in TEC at the equatorial ionization anomaly crest region are observed, resulting in increased range delay during the periods of storm activity. Further, the rapid storm-time changes in TEC resulted in a number of phase slips in the GPS signal compared with quiet days. These phase slips often result in loss of lock of the GPS receivers, similar to what occurs during strong (>10 dB) L-band scintillation events, adversely affecting GPS-based navigation.
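
    The range delay mentioned here is, to first order, proportional to TEC and inversely proportional to the squared carrier frequency (delay ≈ 40.3·TEC/f² metres); the small helper below shows the scale of the effect for GPS L1.

        def iono_range_delay(tec_tecu, freq_hz=1.57542e9):
            """First-order ionospheric group delay in metres.

            tec_tecu: slant TEC in TEC units (1 TECU = 1e16 electrons/m^2)
            freq_hz:  carrier frequency (GPS L1 by default)
            """
            return 40.3 * tec_tecu * 1e16 / freq_hz ** 2

        print(iono_range_delay(50))   # ~8.1 m of range delay for 50 TECU on L1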

  13. Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.

    Science.gov (United States)

    de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie

    2017-09-01

    Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, but only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. In particular, a dedicated turn-by-turn guidance device was favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.

  14. Synthetic vision to augment sensor based vision for remotely piloted vehicles

    NARCIS (Netherlands)

    Tadema, J.; Koeners, J.; Theunissen, E.

    2006-01-01

    In the past fifteen years, several research programs have demonstrated potential advantages of synthetic vision technology for manned aviation. More recently, some research programs have focused on integrating synthetic vision technology into control stations for remotely controlled aircraft. The

  15. Line search and data fusion by bundle adjustment; Application to vision-based localization; Recherche lineaire et fusion de donnees par ajustement de faisceaux; Application a la localisation par vision

    Energy Technology Data Exchange (ETDEWEB)

    Michot, J.

    2010-12-09

    The work presented in this manuscript lies in the field of computer vision and tackles the problem of real-time vision-based localization and 3D reconstruction. In this context, the trajectory of a camera and the 3D structure of the filmed scene are initially estimated by linear algorithms and then optimized by a nonlinear algorithm, bundle adjustment. The thesis first presents a new line search technique dedicated to the nonlinear minimization algorithms used in Structure-from-Motion. The proposed technique is not iterative and can be easily integrated into traditional bundle adjustment frameworks. This technique, called Global Algebraic Line Search (G-ALS), together with its two-dimensional variant (Two-way ALS), accelerates the convergence of the bundle adjustment algorithm. Approximating the reprojection error by an algebraic distance enables the analytical calculation of an effective step amplitude (or two amplitudes for the Two-way ALS variant) by solving a degree-3 (G-ALS) or degree-5 (Two-way ALS) polynomial. Our experiments, conducted on simulated and real data, show that this amplitude, which is optimal for the algebraic distance, is also efficient for the Euclidean distance and reduces the convergence time of the minimizations. One difficulty of real-time tracking algorithms (monocular SLAM) is that the estimated trajectory is often affected by drifts in absolute orientation, position and scale. Since these algorithms are incremental, errors and approximations accumulate throughout the trajectory and cause global drifts. In addition, a vision-based tracking system can always be dazzled or used under conditions that temporarily prevent it from computing the location of the system. To solve these problems, we propose to use an additional sensor measuring the displacement of the camera. The type of sensor used will vary depending on the targeted application (an odometer for a vehicle, a lightweight inertial navigation system for a person). We propose to
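
    The closed-form G-ALS step itself requires the algebraic error model from the thesis; a numerical stand-in for the same idea is to fit a cubic to the one-dimensional cost along the descent direction and take the best stationary point, which is likewise non-iterative. The sketch below is that stand-in, not the thesis algorithm.

        import numpy as np

        def polynomial_line_search(cost, x, delta, lambdas=(0.0, 0.5, 1.0, 1.5)):
            """Non-iterative step selection along a descent direction delta.

            Fits a cubic to phi(l) = cost(x + l*delta) and returns the sampled or
            stationary step with the lowest fitted cost.
            """
            phi = [cost(x + l * delta) for l in lambdas]
            c = np.polyfit(lambdas, phi, 3)           # cubic coefficients
            crit = np.roots(np.polyder(c))            # stationary points of the fit
            cand = list(lambdas) + [r.real for r in crit
                                    if abs(r.imag) < 1e-9 and r.real > 0]
            return min(cand, key=lambda l: np.polyval(c, l))

        # usage: x_new = x + polynomial_line_search(cost, x, delta) * delta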

  16. Toward Functional Augmented Reality in Marine Navigation : A Cognitive Work Analysis

    NARCIS (Netherlands)

    Procee, S.; Borst, C.; van Paassen, M.M.; Mulder, M.; Bertram, V.

    2017-01-01

    Augmented Reality, (AR) also known as vision-overlay, can help the navigator to visually detect a dangerous target by the overlay of a synthetic image, thus providing a visual cue over the real world. This is the first paper of a series about the practicalities and consequences of implementing AR in

  17. Bio-robots automatic navigation with graded electric reward stimulation based on Reinforcement Learning.

    Science.gov (United States)

    Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

    Bio-robots based on brain-computer interfaces (BCI) suffer from a failure to consider the characteristics of the animals in navigation. This paper proposes a new method for bio-robots' automatic navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of animals. Given a graded electrical reward, the animal, e.g. a rat, seeks the maximum reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route by co-learning. This work provides significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
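
    The reward-generating idea can be illustrated with tabular Q-learning on a toy grid, where the graded reward grows as the agent nears the goal; the grid and parameters are arbitrary assumptions for illustration, not the paper's setup.

        import numpy as np

        n, goal = 5, (4, 4)                       # 5x5 grid, goal in one corner
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        Q = np.zeros((n, n, 4))
        alpha, gamma, eps = 0.5, 0.9, 0.2
        rng = np.random.default_rng(1)

        def graded_reward(s):
            """Stronger stimulation the closer the agent gets to the goal."""
            return -0.1 * (abs(goal[0] - s[0]) + abs(goal[1] - s[1]))

        for episode in range(300):
            s = (0, 0)
            while s != goal:
                a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
                nxt = (min(max(s[0] + moves[a][0], 0), n - 1),
                       min(max(s[1] + moves[a][1], 0), n - 1))
                r = 10.0 if nxt == goal else graded_reward(nxt)
                Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[s][a])
                s = nxt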

  18. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    Science.gov (United States)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS - Flexible Manufacturing Systems, work cells, and work stations and their control hierarchy are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells as examples to exemplify the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  19. A Navigation/Positioning Service Based on Pseudolites Installed on Stratospheric Airships

    OpenAIRE

    辻井, 利昭; TSUJII, Toshiaki; 張替, 正敏; HARIGAE, Masatoshi

    2002-01-01

    Transmitters of GPS-like signals, called pseudolites (PL) or "pseudo-satellites" have been widely investigated as an additional ranging source for performance enhancement of GPS. Ground-based GPS augmentation systems using pseudolites have been investigated for several applications such as vehicle navigation in downtown urban canyons, positioning in deep open-cut pits and mines and precision landing of aircraft. The concept of an innovative GPS navigation/ positioning system augmented by airs...

  20. A State-of-the-Art Survey of Indoor Positioning and Navigation Systems and Technologies

    Directory of Open Access Journals (Sweden)

    Wilson Sakpere

    2017-12-01

    Full Text Available The research and use of positioning and navigation technologies outdoors has seen a steady and exponential growth. Based on this success, there have been attempts to implement these technologies indoors, leading to numerous studies. Most of the algorithms, techniques and technologies used have been implemented outdoors. However, how they fare indoors is different altogether. Thus, several technologies have been proposed and implemented to improve positioning and navigation indoors. Among them are Infrared (IR), Ultrasound, Audible Sound, Magnetic, Optical and Vision, Radio Frequency (RF), Visible Light, Pedestrian Dead Reckoning (PDR)/Inertial Navigation System (INS) and Hybrid. The RF technologies include Bluetooth, Ultra-wideband (UWB), Wireless Sensor Network (WSN), Wireless Local Area Network (WLAN), Radio-Frequency Identification (RFID) and Near Field Communication (NFC). In addition, positioning techniques applied in indoor positioning systems include the signal properties and positioning algorithms. The prevalent signal properties are Angle of Arrival (AOA), Time of Arrival (TOA), Time Difference of Arrival (TDOA) and Received Signal Strength Indication (RSSI), while the positioning algorithms are Triangulation, Trilateration, Proximity and Scene Analysis/Fingerprinting. This paper presents a state-of-the-art survey of indoor positioning and navigation systems and technologies, and their use in various scenarios. It analyses distinct positioning technology metrics such as accuracy, complexity, cost, privacy, scalability and usability. This paper has profound implications for future studies of positioning and navigation.
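
    Of the algorithms listed, trilateration is the most compact to write down: subtracting one anchor's range equation from the others cancels the quadratic terms and leaves a small linear least-squares problem. A generic sketch, not tied to any particular system in the survey:

        import numpy as np

        def trilaterate(anchors, ranges):
            """Least-squares position from >= 3 anchor positions and measured distances."""
            anchors = np.asarray(anchors, float)
            r = np.asarray(ranges, float)
            x0, r0 = anchors[0], r[0]
            A = 2 * (anchors[1:] - x0)
            b = (r0 ** 2 - r[1:] ** 2
                 + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
            p, *_ = np.linalg.lstsq(A, b, rcond=None)
            return p

        print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~(5, 5)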

  1. Sensory bases of navigation.

    Science.gov (United States)

    Gould, J L

    1998-10-08

    Navigating animals need to know both the bearing of their goal (the 'map' step) and how to determine that direction (the 'compass' step). Compasses are typically arranged in hierarchies, with magnetic backup as a last resort when celestial information is unavailable. Magnetic information is often essential to calibrating celestial cues, though, and repeated recalibration between celestial and magnetic compasses is important in many species. Most magnetic compasses are based on magnetite crystals, but others make use of induction or paramagnetic interactions between short-wavelength light and visual pigments. Though odors may be used in some cases, most if not all long-range maps probably depend on magnetite. Magnetite-based map senses are used to measure only latitude in some species, but provide the distance and direction of the goal in others.

  2. Fisiologia da visão binocular Physiology of binocular vision

    Directory of Open Access Journals (Sweden)

    Harley E. A. Bicas

    2004-02-01

    Full Text Available The binocular vision of human beings results from the almost complete superimposition of the monocular visual fields, which allows a much finer perceptual discrimination of the egocentric localization of objects in space (stereopsis), but only within a very narrow band (the horopter). Before and beyond it, diplopia and confusion are present, so that a physiologic (cortical) suppression is necessary to prevent them from becoming conscious. The geometry of the horopter and its physiologic implications (Hillebrand's deviation, Kundt's partition, Panum's area) are analyzed, as well as clinical aspects of normal binocular vision (simultaneous perception, fusion, stereoscopic vision) and of adaptations to its affected states (pathologic suppression, amblyopia, anomalous retinal correspondence).

  3. X-Ray Pulsar Based Navigation and Time Determination, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — DARPA recently initiated the XNAV program to undertake development of GPS independent, precision navigation and time determination based on observations of certain...

  4. Algorithmic strategies for FPGA-based vision

    OpenAIRE

    Lim, Yoong Kang

    2016-01-01

    As demands for real-time computer vision applications increase, implementations on alternative architectures have been explored. These architectures include Field-Programmable Gate Arrays (FPGAs), which offer a high degree of flexibility and parallelism. A problem with this is that many computer vision algorithms have been optimized for serial processing, and this often does not map well to FPGA implementation. This thesis introduces the concept of FPGA-tailored computer vision algorithms...

  5. Enabling Persistent Autonomy for Underwater Gliders with Ocean Model Predictions and Terrain Based Navigation

    Directory of Open Access Journals (Sweden)

    Andrew eStuntz

    2016-04-01

    Full Text Available Effective study of ocean processes requires sampling over the duration of long (weeks to months) oscillation patterns. Such sampling requires persistent, autonomous underwater vehicles that have a similarly long deployment duration. The spatiotemporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. In this paper, we consider the combination of two methods for reducing navigation and localization error: a predictive approach based on ocean model predictions and a prior information approach derived from terrain-based navigation. The motivation for this work is not only real-time state estimation, but also accurate reconstruction of the actual path that the vehicle traversed, to contextualize the gathered data with respect to the science question at hand. We present an application for the practical use of priors and predictions for large-scale ocean sampling. This combined approach builds upon previous works by the authors, and accurately localizes the traversed path of an underwater glider over long-duration ocean deployments. The proposed method takes advantage of the reliable, short-term predictions of an ocean model, and the utility of priors used in terrain-based navigation over areas of significant bathymetric relief, to bound uncertainty error in dead-reckoning navigation. This method improves upon our previously published works by 1) demonstrating the utility of our terrain-based navigation method with multiple field trials, and 2) presenting a hybrid algorithm that combines both approaches to bound navigational error and uncertainty for long-term deployments of underwater vehicles. We demonstrate the approach by examining data from actual field trials with autonomous underwater gliders, and demonstrate an ability to estimate the geographical location of an underwater glider to 2
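
    The terrain-based component can be sketched as a particle filter: particles are propagated by dead reckoning and reweighted by how well the measured seafloor depth agrees with a bathymetric prior. Everything below (the map surface, noise levels) is a synthetic placeholder, shown only to convey the mechanism.

        import numpy as np

        rng = np.random.default_rng(2)

        def bathymetry(p):
            """Prior terrain map: seafloor depth (m) at positions p (placeholder surface)."""
            return 100 + 5 * np.sin(0.01 * p[..., 0]) + 3 * np.cos(0.02 * p[..., 1])

        particles = rng.normal([0.0, 0.0], 50.0, (1000, 2))   # initial uncertainty (m)

        def step(particles, dr_displacement, measured_depth, sigma=1.0):
            # propagate by dead reckoning, with diffusion for currents and drift
            particles = particles + dr_displacement + rng.normal(0, 5, particles.shape)
            # weight by agreement between measured depth and the bathymetric prior
            w = np.exp(-0.5 * ((measured_depth - bathymetry(particles)) / sigma) ** 2)
            w /= w.sum()
            # resample to concentrate particles where the terrain matches
            return particles[rng.choice(len(particles), len(particles), p=w)]

        particles = step(particles, np.array([10.0, 0.0]), measured_depth=102.5)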

  6. Vision-based obstacle recognition system for automated lawn mower robot development

    Science.gov (United States)

    Mohd Zin, Zalhan; Ibrahim, Ratnawati

    2011-06-01

    Digital image processing (DIP) techniques have recently been widely used in various types of applications. Classification and recognition of a specific object using a vision system require some challenging tasks in the fields of image processing and artificial intelligence. The ability and efficiency of a vision system to capture and process images is very important for any intelligent system such as an autonomous robot. This paper gives attention to the development of a vision system that could contribute to the development of an automated vision-based lawn mower robot. The work involves the implementation of DIP techniques to detect and recognize three different types of obstacles that usually exist on a football field. Focus was given to the study of different types and sizes of obstacles, the development of a vision-based obstacle recognition system, and the evaluation of the system's performance. Image processing techniques such as image filtering, segmentation, enhancement and edge detection have been applied in the system. The results have shown that the developed system is able to detect and recognize various types of obstacles on a football field with a recognition rate of more than 80%.

  7. A Vision-Based Approach to Fire Detection

    Directory of Open Access Journals (Sweden)

    Pedro Gomes

    2014-09-01

    Full Text Available This paper presents a vision-based method for fire detection from fixed surveillance smart cameras. The method integrates several well-known techniques properly adapted to cope with the challenges related to the actual deployment of the vision system. Concretely, background subtraction is performed with a context-based learning mechanism so as to attain higher accuracy and robustness. The computational cost of a frequency analysis of potential fire regions is reduced by means of focusing its operation with an attentive mechanism. For fast discrimination between fire regions and fire-coloured moving objects, a new colour-based model of fire's appearance and a new wavelet-based model of fire's frequency signature are proposed. To reduce the false alarm rate due to the presence of fire-coloured moving objects, the category and behaviour of each moving object are taken into account in the decision-making. To estimate the expected object's size in the image plane and to generate geo-referenced alarms, the camera-world mapping is approximated with a GPS-based calibration process. Experimental results demonstrate the ability of the proposed method to detect fires with an average success rate of 93.1% at a processing rate of 10 Hz, which is often sufficient for real-life applications.

  8. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered, accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  9. A Case of Recurrent Transient Monocular Visual Loss after Receiving Sildenafil

    Directory of Open Access Journals (Sweden)

    Asaad Ghanem Ghanem

    2011-01-01

    Full Text Available A 53-year-old man attended the Ophthalmic Center Clinic, Mansoura University, Egypt, with recurrent transient monocular visual loss after receiving sildenafil citrate (Viagra) for erectile dysfunction. Examination for possible risk factors revealed mild hypercholesterolemia. Family history showed that his father had suffered from bilateral nonarteritic anterior ischemic optic neuropathy (NAION). Physicians might look for arteriosclerotic risk factors and a family history of NAION among predisposing risk factors before prescribing sildenafil-type erectile dysfunction drugs.

  10. The use of contact lens telescopic systems in low vision rehabilitation.

    Science.gov (United States)

    Vincent, Stephen J

    2017-06-01

    Refracting telescopes are afocal compound optical systems consisting of two lenses that produce an apparent magnification of the retinal image. They are routinely used in visual rehabilitation in the form of monocular or binocular hand-held low vision aids, and head- or spectacle-mounted devices to improve distance visual acuity, and with slight modifications, to enhance acuity for near and intermediate tasks. Since the advent of ground glass haptic lenses in the 1930's, contact lenses have been employed as a useful refracting element of telescopic systems; primarily as a mobile ocular lens (the eyepiece) that moves with the eye. Telescopes which incorporate a contact lens eyepiece significantly improve the weight, cosmesis, and field of view compared to traditional spectacle-mounted telescopes, in addition to potential related psycho-social benefits. This review summarises the underlying optics and use of contact lenses to provide telescopic magnification from the era of Descartes, to Dallos, and the present day. The limitations and clinical challenges associated with such devices are discussed, along with the potential future use of reflecting telescopes incorporated within scleral lenses and tactile contact lens systems in low vision rehabilitation. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
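
    For orientation, the underlying optics are those of any afocal telescope: the angular magnification is set by the ratio of the focal lengths (equivalently, of the powers) of the two elements, which are separated by the sum of their focal lengths,

        M = -\frac{f_{\text{objective}}}{f_{\text{eyepiece}}} = -\frac{F_{\text{eyepiece}}}{F_{\text{objective}}}, \qquad t = f_{\text{objective}} + f_{\text{eyepiece}}.

    As an illustrative example (values assumed here, not taken from the review), a +32 D spectacle objective with a -50 D contact lens eyepiece gives M of about 1.6x at a separation of roughly 11 mm, which is on the order of a normal vertex distance.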

  11. Control of multiple robots using vision sensors

    CERN Document Server

    Aranda, Miguel; Sagüés, Carlos

    2017-01-01

    This monograph introduces novel methods for the control and navigation of mobile robots using multiple-1-d-view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations, simultaneously enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras in driving robot formations on the ground. Again, this has benefits of simplicity, scalability and flexibility. Coverage includes details of: a method for visual robot homing based on a memory of omni-directional images a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidal-varying control inputs an algorithm to recover a generic motion between two 1-d views and which does not require a third view a novel multi-robot setup where multiple camera-carrying unmanned aerial vehicles are used to observe and c...

  12. Memristive device based learning for navigation in robots.

    Science.gov (United States)

    Sarim, Mohammad; Kumar, Manish; Jha, Rashmi; Minai, Ali A

    2017-11-08

    Biomimetic robots have gained attention recently for various applications ranging from resource hunting to search and rescue operations during disasters. Biological species are known to intuitively learn from the environment, gather and process data, and make appropriate decisions. Such sophisticated computing capabilities in robots are difficult to achieve, especially if done in real time with ultra-low energy consumption. Here, we present a novel memristive-device-based learning architecture for robots. Two-terminal memristive devices with resistive switching of an oxide layer are modeled in a crossbar array to develop a neuromorphic platform that can impart active real-time learning capabilities to a robot. This approach is validated by navigating a robot vehicle in an unknown environment with randomly placed obstacles. Further, the proposed scheme is compared with reinforcement-learning-based algorithms using local and global knowledge of the environment. The simulation as well as experimental results corroborate the validity and potential of the proposed learning scheme for robots. The results also show that our learning scheme approaches an optimal solution for some environment layouts in robot navigation.

  13. X-Ray Pulsar Based Navigation and Time Determination, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcosm will build on the Phase I X-ray pulsar-based navigation and timing (XNAV) feasibility assessment to develop a detailed XNAV simulation capability to...

  14. A spatial registration method for navigation system combining O-arm with spinal surgery robot

    Science.gov (United States)

    Bai, H.; Song, G. L.; Zhao, Y. W.; Liu, X. Z.; Jiang, Y. X.

    2018-05-01

    Minimally invasive spinal surgery has become increasingly popular in recent years as it reduces the chance of post-operative complications. However, the procedure of spinal surgery is complicated and the surgical view in minimally invasive surgery is limited. In order to increase the quality of percutaneous pedicle screw placement, the O-arm, a mobile intraoperative imaging system, is used to assist surgery. With the extensive use of the O-arm, robot navigation systems combined with it are also increasing. One of the major problems in a surgical navigation system is to associate the patient space with the intra-operative image space. This study proposes a spatial registration method for a spinal surgical robot navigation system, which uses the O-arm to scan a calibration phantom with metal calibration spheres. First, the metal artifacts are reduced in the CT slices so that the circles in the images can be identified based on moment invariants, yielding the positions of the calibration spheres in the image space. The registration matrix is then obtained based on the ICP algorithm. Finally, the position error is calculated to verify the feasibility and accuracy of the registration method.
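
    The core of each ICP iteration, aligning the sphere centres recovered from the scan with their known phantom coordinates, has the closed-form SVD (Kabsch) solution sketched below; the point sets are invented placeholders for the self-check at the end.

        import numpy as np

        def rigid_transform(P, Q):
            """Best-fit R, t with R @ P[i] + t ~= Q[i] (Kabsch/SVD solution)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                  # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        # sphere centres in image space (P) vs. known phantom coordinates (Q)
        P = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30.]])
        Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.]])
        Q = P @ Rz.T + np.array([5.0, 2.0, 1.0])
        R, t = rigid_transform(P, Q)
        print(np.abs(P @ R.T + t - Q).max())           # registration error ~0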

  15. A smart sensor-based vision system: implementation and evaluation

    International Nuclear Information System (INIS)

    Elouardi, A; Bouaziz, S; Dupret, A; Lacassagne, L; Klein, J O; Reynaud, R

    2006-01-01

    One of the methods of addressing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanges with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach used is compared with two architectures implementing CMOS active pixel sensors (APS) and interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.

  17. Stable haptic feedback based on a Dynamic Vision Sensor for Microrobotics.

    OpenAIRE

    Bolopion, Aude; Ni, Zhenjiang; Agnus, Joël; Benosman, Ryad; Régnier, Stéphane

    2012-01-01

    This work presents a stable vision-based haptic feedback scheme for micromanipulation using both an asynchronous Address Event Representation (AER) silicon retina and a conventional frame-based camera. At this scale, most of the grippers used to manipulate objects lack force sensing. High-frequency vision-based detection thus provides a sound way to obtain the positions of the object and the tool in order to provide virtual haptic guides. Artificial retinas present hig...

  18. Autonomous Navigation for Autonomous Underwater Vehicles Based on Information Filters and Active Sensing

    Directory of Open Access Journals (Sweden)

    Tianhong Yan

    2011-11-01

    Full Text Available This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix of an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. A mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to vehicle motion. In order to verify the feasibility of the proposed navigation methods for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves navigation accuracy compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM.
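
    The constant-time claim rests on the information form of the filter: a measurement only touches the entries of the information matrix selected by its Jacobian. A minimal linear-Gaussian sketch of this update (the SEIF sparsification step itself is omitted, and the toy measurement model is an assumption):

```python
import numpy as np

# Minimal information-filter measurement update (linear-Gaussian case).
# SEIF additionally sparsifies weak links in Omega; that step is omitted here.

def eif_update(Omega, xi, H, R, z):
    """Information matrix Omega = Sigma^-1, information vector xi = Omega @ mu.
    Measurement model z = H @ x + noise, with noise covariance R.
    Only the entries of Omega touched by H change, hence the (near)
    constant time per update that the paper exploits."""
    Rinv = np.linalg.inv(R)
    Omega_new = Omega + H.T @ Rinv @ H
    xi_new = xi + H.T @ Rinv @ z
    return Omega_new, xi_new

# Toy example: 2-D vehicle position observed directly by a sonar fix.
Omega = np.eye(2) * 0.5                     # weak prior
xi = Omega @ np.array([0.0, 0.0])
H = np.eye(2)                               # direct position observation
R = np.diag([0.04, 0.04])                   # 0.2 m measurement std dev
z = np.array([1.0, 2.0])

Omega, xi = eif_update(Omega, xi, H, R, z)
mu = np.linalg.solve(Omega, xi)             # recover the mean when needed
print("posterior mean estimate:", mu)
```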

  20. Grounding Our Vision: Brain Research and Strategic Vision

    Science.gov (United States)

    Walker, Mike

    2011-01-01

    While recognizing the value of "vision," it could be argued that vision alone--at least in schools--is not enough to rally the financial and emotional support required to translate an idea into reality. A compelling vision needs to reflect substantive, research-based knowledge if it is to spark the kind of strategic thinking and insight…

  1. How the Venetian Blind Percept Emerges from the Laminar Cortical Dynamics of 3D Vision

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2014-08-01

    Full Text Available The 3D LAMINART model of 3D vision and figure-ground perception is used to explain and simulate a key example of the Venetian blind effect and show how it is related to other well-known perceptual phenomena such as Panum's limiting case. The model shows how identified neurons that interact in hierarchically organized laminar circuits of the visual cortex can simulate many properties of 3D vision percepts, notably consciously seen surface percepts, which are predicted to arise when filled-in surface representations are integrated into surface-shroud resonances between visual and parietal cortex. The model describes how monocular and binocular oriented filtering interacts with later stages of 3D boundary formation and surface filling-in in the lateral geniculate nucleus (LGN) and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes enables computationally complementary boundary and surface formation properties to generate a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, a disparity filter, which helps to solve the Correspondence Problem by eliminating false matches, is predicted to be realized as part of the boundary grouping process in layer 2/3 of cortical area V2. The model has been used to simulate the consciously seen 3D surface percepts in 18 psychophysical experiments. These percepts include the Venetian blind effect, Panum's limiting case, contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, stereopsis with polarity...

  2. Ground-Based Global Navigation Satellite System Combined Broadcast Ephemeris Data (daily files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) Combined Broadcast Ephemeris Data (daily files of all distinct navigation messages...

  3. A Case of Complete Recovery of Fluctuating Monocular Blindness Following Endovascular Treatment in Internal Carotid Artery Dissection.

    Science.gov (United States)

    Kim, Ki-Tae; Baik, Seung Guk; Park, Kyung-Pil; Park, Min-Gyu

    2015-09-01

    Monocular blindness may appear as the first symptom of internal carotid artery dissection (ICAD). However, there have been no reports of monocular visual loss that repeatedly occurs and disappears in response to postural change in ICAD. A 33-year-old woman presented with transient monocular blindness (TMB) following acute-onset headache. TMB repeatedly occurred in response to postural change. Two days later, she experienced transient dysarthria and right hemiparesis in the upright position. Pupil size and light reflex were normal, but a relative afferent pupillary defect was positive in the left eye. Diffusion-weighted imaging showed no acute lesion, but perfusion-weighted imaging showed perfusion delay in the left ICA territory. Digital subtraction angiography demonstrated a false lumen and an intraluminal filling defect in the proximal segment of the left ICA. Carotid stenting was performed urgently. After carotid stenting, the left relative afferent pupillary defect disappeared and TMB was no longer provoked by upright posture. At discharge, left visual acuity was completely normalized. Because fluctuating visual symptoms in ICAD may be associated with a hemodynamically unstable status, assessment of the perfusion status should be done quickly. Carotid stenting may help improve the fluctuating visual symptoms and hemodynamically unstable status in selected patients with ICAD. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  4. The Enright phenomenon. Stereoscopic distortion of perceived driving speed induced by monocular pupil dilation.

    Science.gov (United States)

    Carkeet, Andrew; Wood, Joanne M; McNeill, Kylie M; McNeill, Hamish J; James, Joanna A; Holder, Leigh S

    The Enright phenomenon describes the distortion in speed perception experienced by an observer looking sideways from a moving vehicle when viewing with interocular differences in retinal image brightness, usually induced by neutral density filters. We investigated whether the Enright phenomenon could be induced with monocular pupil dilation using tropicamide. We tested 17 visually normal young adults on a closed-road driving circuit. Participants were asked to travel at goal speeds of 40 km/h and 60 km/h while looking sideways from the vehicle: (i) with both eyes with undilated pupils; (ii) with both eyes with dilated pupils; (iii) with only the leading eye dilated; and (iv) with only the trailing eye dilated. For each condition we recorded actual driving speed. With the pupil of the leading eye dilated, participants drove significantly faster (by an average of 3.8 km/h) than with both eyes dilated (p=0.02); with the trailing eye dilated, participants drove significantly slower (by an average of 3.2 km/h) than with both eyes dilated (p<0.001). The speed with the leading eye dilated was faster by an average of 7 km/h than with the trailing eye dilated (p<0.001). There was no significant difference between driving speeds when viewing with both eyes either dilated or undilated (p=0.322). Our results are the first to show a measurable change in driving behaviour following monocular pupil dilation and support predictions based on the Enright phenomenon. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  5. Spatial navigation by congenitally blind individuals.

    Science.gov (United States)

    Schinazi, Victor R; Thrash, Tyler; Chebat, Daniel-Robert

    2016-01-01

    Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population. For further resources related to this article, please visit the WIREs website. © 2015 The Authors. WIREs Cognitive Science published by Wiley Periodicals, Inc.

  6. Exploring User Engagement in Information Networks: Behaviour-Based Navigation Modelling, Ideas and Directions

    Directory of Open Access Journals (Sweden)

    Vesna Kumbaroska

    2017-04-01

    Full Text Available Revealing the endless array of user behaviors in an online environment is a very good indicator of the user's interests, whether in the process of browsing or in purchasing. One such behavior is navigation behavior, so detected user navigation patterns can be used for practical purposes such as improving user engagement, turning browsers into buyers, personalizing content or interface, etc. In this regard, our research represents a connection between navigation modelling and user engagement. The Generalized Stochastic Petri Net concept is proposed for stochastic behaviour-based modelling of the navigation process, in order to measure user engagement components. Different types of users are automatically identified and clustered according to their navigation behaviors, so the developed model gives great insight into the navigation process. As part of this study, Peterson's model for measuring user engagement is explored and a direct calculation of its components is illustrated. At the same time, assuming that several user sessions/visits are initialized in a certain time frame, following the Petri net dynamics indicates that the proposed behaviour-based model could be used for calculating user engagement metrics; some basic ideas are discussed and initial directions are given.
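
    The paper's Generalized Stochastic Petri Net machinery is not reproducible from the abstract, but the underlying idea of estimating navigation behaviour from session streams can be illustrated with a far simpler first-order Markov sketch (page names and the two engagement-style indicators below are purely illustrative):

```python
from collections import Counter, defaultdict

# Much simpler than the paper's GSPN model: a first-order Markov estimate of
# navigation behaviour from raw session click-streams, plus two basic
# engagement-style indicators. All page names are illustrative.

sessions = [
    ["home", "search", "product", "cart"],
    ["home", "product", "product", "exit"],
    ["search", "product", "cart", "checkout"],
]

counts = defaultdict(Counter)
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1

# Row-normalised transition probabilities P(next page | current page).
P = {page: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
     for page, nxts in counts.items()}

depth = sum(len(s) for s in sessions) / len(sessions)   # mean session depth
reach_checkout = sum("checkout" in s for s in sessions) / len(sessions)

print("P(product -> cart) =", P["product"]["cart"])
print("mean session depth =", depth, "| conversion-like rate =", reach_checkout)
```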

  7. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation

    Directory of Open Access Journals (Sweden)

    Kailun Yang

    2018-05-01

    Full Text Available Navigational assistance aims to help visually-impaired people move about the environment safely and independently. This topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we put forward seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually-impaired users, demonstrating the effectiveness and versatility of the assistive framework.
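
    The paper's own network is not specified in the record, but the inference pattern it relies on, one forward pass yielding per-pixel labels that downstream logic converts into navigation masks, can be sketched with a pretrained stand-in model (torchvision's DeepLabV3 here; the file name and the walkable-class index are assumptions):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Stand-in for the paper's network: torchvision's pretrained DeepLabV3 gives
# per-pixel class labels from a single RGB frame; a real assistive system
# would map classes to "traversable / obstacle / hazard" masks.

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("street_scene.jpg").convert("RGB")   # hypothetical frame
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"]  # (1, C, H, W) logits
labels = out.argmax(dim=1).squeeze(0)                 # (H, W) class ids

# Example: pixels predicted as a walkable class vs everything else.
walkable = labels == 0        # class index is dataset-specific (assumption)
print("walkable fraction:", walkable.float().mean().item())
```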

  8. Support to Academic Based Research on Leadership Vision and Gender Implications

    National Research Council Canada - National Science Library

    Murphy, Sally

    1997-01-01

    Support to Academic Based Research on Leadership Vision and Gender Implications suggests that additional scholarly research, including that which can be leveraged by the U.S. Army from academic institutional efforts, is necessary to achieve the vision of the fourth AWC and to support the U.S. Army in its re-engineering efforts.

  9. Application of digital human modeling and simulation for vision analysis of pilots in a jet aircraft: a case study.

    Science.gov (United States)

    Karmakar, Sougata; Pal, Madhu Sudan; Majumdar, Deepti; Majumdar, Dhurjati

    2012-01-01

    Ergonomic evaluation of visual demands becomes crucial for operators/users when rapid decision making is needed under extreme time constraints, as in the navigation task of a jet aircraft. The research reported here comprises an ergonomic evaluation of pilots' vision in a jet aircraft in a virtual environment, demonstrating how the vision analysis tools of digital human modeling software can be used effectively for such a study. Three (03) dynamic digital pilot models, representative of the smallest, average and largest Indian pilot population, were generated from an anthropometric database and interfaced with a digital prototype of the cockpit in Jack software for analysis of vision within and outside the cockpit. Vision analysis tools like view cones, eye view windows, blind spot area, obscuration zone, reflection zone, etc. were employed during evaluation of the visual fields. The vision analysis tools were also used for studying kinematic changes of the pilot's body joints during a simulated gazing activity. From the present study, it can be concluded that the vision analysis tools of digital human modeling software are very effective in evaluating the position and alignment of different displays and controls in the workstation, based upon their priorities within the visual fields and the anthropometry of the targeted users, long before the development of a physical prototype.

  10. Electrophysiological correlates of mental navigation in blind and sighted people.

    Science.gov (United States)

    Kober, Silvia Erika; Wood, Guilherme; Kampl, Christiane; Neuper, Christa; Ischebeck, Anja

    2014-10-15

    The aim of the present study was to investigate functional reorganization of the occipital cortex for a mental navigation task in blind people. Eight completely blind adults and eight sighted matched controls performed a mental navigation task, in which they mentally imagined walking along familiar routes of their hometown during a multi-channel EEG measurement. A motor imagery task was used as a control condition. Furthermore, electrophysiological activation patterns during a resting measurement with open and closed eyes were compared between blind and sighted participants. During the resting measurement with open eyes, no differences in EEG power were observed between groups, whereas sighted participants showed higher alpha (8-12 Hz) activity at occipital sites compared to blind participants during the eyes-closed resting condition. During the mental navigation task, blind participants showed a stronger event-related desynchronization in the alpha band over the visual cortex compared to sighted controls, indicating a stronger activation of this brain region in the blind. Furthermore, groups showed differences in functional brain connectivity between fronto-central and parietal-occipital brain networks during mental navigation, indicating stronger visuo-spatial processing in sighted than in blind people during mental navigation. Differences in electrophysiological parameters between groups were specific to mental navigation, since no group differences were observed during motor imagery. These results indicate that in the absence of vision the visual cortex takes over other functions, such as spatial navigation. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Laser-based Relative Navigation Using GPS Measurements for Spacecraft Formation Flying

    Science.gov (United States)

    Lee, Kwangwon; Oh, Hyungjik; Park, Han-Earl; Park, Sang-Young; Park, Chandeok

    2015-12-01

    This study presents a precise relative navigation algorithm using both laser and Global Positioning System (GPS) measurements in real time. The measurement model of the navigation algorithm between two spacecraft comprises relative distances measured by laser instruments and single differences of GPS pseudo-range measurements in spherical coordinates. Based on the measurement model, the Extended Kalman Filter (EKF) is applied to smooth the pseudo-range measurements and to obtain the relative navigation solution. While a navigation algorithm using only laser measurements may become inaccurate because of the limited accuracy of spacecraft attitude estimation when the distance between spacecraft is large, the proposed approach provides an accurate solution even in such cases by employing the smoothed GPS pseudo-range measurements. Numerical simulations demonstrate that the errors of the proposed algorithm are reduced by about 12% compared to those of an algorithm using only laser measurements, given angular measurement accuracy greater than 0.001° at relative distances greater than 30 km.
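
    For the laser part of the measurement model, each update reduces to a standard EKF step in which the measured quantity is the norm of the relative position vector, so the Jacobian is the unit line-of-sight vector. A self-contained sketch under an assumed state layout and noise values (GPS single differences would be folded in analogously with their own H and R):

```python
import numpy as np

# Sketch of one EKF measurement update for the laser part of the model:
# the instrument measures the norm of the relative position vector, so the
# Jacobian is the unit line-of-sight vector. GPS single-difference
# pseudo-ranges would be folded in the same way with their own H and R.

def ekf_range_update(x, P, z, sigma):
    """x = [relative position (3), relative velocity (3)], z = laser range."""
    p = x[:3]
    rho = np.linalg.norm(p)
    H = np.zeros((1, 6))
    H[0, :3] = p / rho                       # d||p||/dp = unit vector
    R = np.array([[sigma ** 2]])
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + (K @ np.array([z - rho])).ravel()
    P = (np.eye(6) - K @ H) @ P
    return x, P

x = np.array([30_000.0, 100.0, -50.0, 0.0, 0.0, 0.0])   # ~30 km separation
P = np.diag([100.0**2] * 3 + [0.1**2] * 3)
x, P = ekf_range_update(x, P, z=30_001.2, sigma=0.02)    # 2 cm laser noise
print("updated relative position:", x[:3])
```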

  12. The PPP Precision Analysis Based on BDS Regional Navigation System

    Directory of Open Access Journals (Sweden)

    ZHU Yongxing

    2015-04-01

    Full Text Available The BeiDou navigation satellite system (BDS) has opened service in most of the Asia-Pacific region, offering the possibility of breaking the technological monopoly of GPS in the field of high-precision applications, so its precise point positioning (PPP) performance has been of great concern. First, the constellation of the BeiDou regional navigation system and the BDS/GPS tracking network is introduced. Second, the precise ephemeris and clock offset accuracy of the BeiDou satellites based on the domestic tracking network is analyzed. Finally, the static and kinematic PPP accuracy is studied and compared with that of GPS. A numerical example with real measurements shows that static and kinematic PPP based on BDS can achieve centimeter-level and decimeter-level accuracy, respectively, reaching the current level of GPS precise point positioning.

  13. Particle Filter Based Fault-tolerant ROV Navigation using Hydro-acoustic Position and Doppler Velocity Measurements

    DEFF Research Database (Denmark)

    Zhao, Bo; Blanke, Mogens; Skjetne, Roger

    2012-01-01

    This paper presents a fault-tolerant navigation system for a remotely operated vehicle (ROV). The navigation system uses hydro-acoustic position reference (HPR) and Doppler velocity log (DVL) measurements to achieve an integrated navigation. The fault-tolerant functionality is based on a modified particle filter. This particle filter is able to run in an asynchronous manner to accommodate the measurement drop-out problem, and it overcomes measurement outliers by switching observation models. Simulations with experimental data show that this fault-tolerant navigation system can accurately estimate...
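
    The observation-model switching idea can be sketched compactly: weight particles under a nominal Gaussian likelihood for HPR fixes, but fall back to a flat, heavy-tailed model when the innovation indicates an outlier. The gating rule and all numbers below are illustrative assumptions, not the paper's design:

```python
import numpy as np

# Sketch of model switching for outlier-tolerant weighting. Asynchronous
# DVL handling is omitted for brevity.

rng = np.random.default_rng(2)
N = 500
particles = rng.normal([0.0, 0.0], 1.0, (N, 2))   # ROV position hypotheses
weights = np.full(N, 1.0 / N)

def weight_update(z, sigma=0.5, gate=3.0):
    global weights
    innov = np.linalg.norm(particles - z, axis=1)
    if np.median(innov) > gate * sigma:     # measurement looks like an outlier
        like = np.full(N, 1e-3)             # flat, heavy-tailed model
    else:
        like = np.exp(-0.5 * (innov / sigma) ** 2)
    weights = weights * like
    weights /= weights.sum()

weight_update(np.array([0.2, -0.1]))        # nominal HPR fix
weight_update(np.array([40.0, 55.0]))       # gross outlier: nearly ignored
est = weights @ particles
print("position estimate:", est)
```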

  14. Evaluation of navigation interfaces in virtual environments

    Science.gov (United States)

    Mestre, Daniel R.

    2014-02-01

    When users are immersed in cave-like virtual reality systems, navigation interfaces have to be used when the size of the virtual environment becomes larger than the physical extent of the cave floor. However, using navigation interfaces, physically static users experience self-motion (visually induced vection). As a consequence, sensory incoherence between vision (indicating self-motion) and other proprioceptive inputs (indicating immobility) can make them feel dizzy and disoriented. We tested different locomotion interfaces in two experimental studies. The objective was twofold: testing spatial learning and cybersickness. In a first experiment, using first-person navigation with a Flystick®, we tested the effect of sensory aids, a spatialized sound or guiding arrows on the ground, attracting the user toward the goal of the navigation task. Results revealed that sensory aids tended to impact spatial learning negatively. Moreover, subjects reported significant levels of cybersickness. In a second experiment, we tested whether such negative effects could be due to poorly controlled rotational motion during simulated self-motion. Subjects used a gamepad, in which rotational and translational displacements were independently controlled by two joysticks. Furthermore, we tested first- versus third-person navigation. No significant difference was observed between these two conditions. Overall, cybersickness tended to be lower than in experiment 1, but the difference was not significant. Future research should further evaluate the hypothesis of a role of passively perceived optical flow in cybersickness by manipulating the virtual environment's structure. It also seems that video-gaming experience might be involved in the user's sensitivity to cybersickness.

  15. Molecular Basis for Ultraviolet Vision in Invertebrates

    OpenAIRE

    Salcedo, Ernesto; Zheng, Lijun; Phistry, Meridee; Bagg, Eve E.; Britt, Steven G.

    2003-01-01

    Invertebrates are sensitive to a broad spectrum of light that ranges from UV to red. Color sensitivity in the UV plays an important role in foraging, navigation, and mate selection in both flying and terrestrial invertebrate animals. Here, we show that a single amino acid polymorphism is responsible for invertebrate UV vision. This residue (UV: lysine vs. blue: asparagine or glutamate) corresponds to amino acid position glycine 90 (G90) in bovine rhodopsin, a site affected in autosomal dominant...

  16. A nationwide population-based study of low vision and blindness in South Korea.

    Science.gov (United States)

    Park, Shin Hae; Lee, Ji Sung; Heo, Hwan; Suh, Young-Woo; Kim, Seung-Hyun; Lim, Key Hwan; Moon, Nam Ju; Lee, Sung Jin; Park, Song Hee; Baek, Seung-Hee

    2014-12-18

    To investigate the prevalence and associated risk factors of low vision and blindness in the Korean population. This cross-sectional, population-based study examined the ophthalmologic data of 22,135 Koreans aged ≥5 years from the fifth Korea National Health and Nutrition Examination Survey (KNHANES V, 2010-2012). According to the World Health Organization criteria, blindness was defined as visual acuity (VA) less than 20/400 in the better-seeing eye, and low vision as VA of 20/60 or worse but 20/400 or better in the better-seeing eye. The prevalence rates were calculated from either presenting VA (PVA) or best-corrected VA (BCVA). Multivariate regression analysis was conducted for adults aged ≥20 years. The overall prevalence rates of PVA-defined low vision and blindness were 4.98% and 0.26%, respectively, and those of BCVA-defined low vision and blindness were 0.46% and 0.05%, respectively. Prevalence increased rapidly above the age of 70 years. For subjects aged ≥70 years, the population-weighted prevalence rates of low vision, based on PVA and BCVA, were 12.85% and 3.87%, respectively, and the corresponding rates of blindness were 0.49% and 0.42%, respectively. The presenting vision problems were significantly associated with age (younger adults or elderly subjects), female sex, low educational level, and lowest household income, whereas the best-corrected vision problems were associated with age ≥ 70 years, a low educational level, and rural residence. This population-based study provides useful information for planning optimal public eye health care services in South Korea. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  17. Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration

    CERN Document Server

    Noureldin, Aboelmagd; Georgy, Jacques

    2013-01-01

    Fundamentals of Inertial Navigation, Satellite-based Positioning and their Integration is an introduction to the field of integrated navigation systems. It serves as an excellent reference for working engineers as well as a textbook for beginners and students new to the area. The book is easy to read and understand with minimal background knowledge. The authors explain the derivations in great detail. The intermediate steps are thoroughly explained so that a beginner can easily follow the material. The book shows a step-by-step implementation of navigation algorithms and provides all the necessary details. It provides detailed illustrations for easy comprehension. The book also presents real field experiments and in-vehicle road test results with professional discussion and analysis. This work is unique in discussing the different INS/GPS integration schemes in an easy-to-understand and straightforward way. Those schemes include loosely versus tightly coupled, open loop versus closed loop, and many more.

  18. An Indexing Scheme for Case-Based Manufacturing Vision Development

    DEFF Research Database (Denmark)

    Wang, Chengbo; Johansen, John; Luxhøj, James T.

    2004-01-01

    This paper focuses on one critical element, indexing, that is, retaining and representing knowledge in an applied case-based reasoning (CBR) model for supporting strategic manufacturing vision development (CBRM). Manufacturing vision (MV) is a kind of knowledge management concept and process concerned with the competence improvement of an enterprise's manufacturing system. There are two types of cases within the CBRM: an event case (EC) and a general supportive case (GSC). We designed one set of indexing vocabulary for the two types of cases, but a different indexing representation structure for each of them...

  19. A Variant of LSD-SLAM Capable of Processing High-Speed Low-Framerate Monocular Datasets

    Science.gov (United States)

    Schmid, S.; Fritsch, D.

    2017-11-01

    We develop a new variant of LSD-SLAM, called C-LSD-SLAM, which is capable of performing monocular tracking and mapping in high-speed low-framerate situations such as those of the KITTI datasets. The methods used here are robust against the influence of erroneously triangulated points near the epipolar direction, which otherwise cause tracking divergence.
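
    LSD-SLAM-style direct tracking minimizes a photometric error over semi-dense reference points rather than matching features. The sketch below only evaluates that residual for a single candidate pose under an assumed pinhole model (a real tracker would minimize it over SE(3) with robust weighting and handle the high-speed baseline issues the paper targets):

```python
import numpy as np

# Core quantity behind direct (LSD-SLAM-style) tracking: the photometric
# error of inverse-depth reference points reprojected into the new frame
# under a candidate pose. Pinhole intrinsics are assumed.

fx = fy = 500.0
cx, cy = 320.0, 240.0

def photometric_error(ref_img, new_img, pts, inv_depth, R, t):
    """pts: (N,2) pixel coords in the reference frame with inverse depths."""
    err, used = 0.0, 0
    for (u, v), rho in zip(pts, inv_depth):
        X = np.array([(u - cx) / fx, (v - cy) / fy, 1.0]) / rho  # back-project
        Xn = R @ X + t                                           # new frame
        if Xn[2] <= 0:
            continue
        un = fx * Xn[0] / Xn[2] + cx
        vn = fy * Xn[1] / Xn[2] + cy
        ui, vi = int(round(un)), int(round(vn))
        if 0 <= vi < new_img.shape[0] and 0 <= ui < new_img.shape[1]:
            r = float(ref_img[int(v), int(u)]) - float(new_img[vi, ui])
            err += r * r
            used += 1
    return err / max(used, 1)

# Toy identity check: same image, identity pose -> zero error.
img = np.random.default_rng(3).integers(0, 255, (480, 640)).astype(np.float32)
pts = np.array([[100, 100], [320, 240], [500, 400]])
rho = np.array([0.5, 0.2, 1.0])
print(photometric_error(img, img, pts, rho, np.eye(3), np.zeros(3)))  # ~0.0
```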

  20. A survey on vision-based human action recognition

    NARCIS (Netherlands)

    Poppe, Ronald Walter

    Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human–computer interaction. The task is challenging due to variations in motion

  1. Feature Space Dimensionality Reduction for Real-Time Vision-Based Food Inspection

    Directory of Open Access Journals (Sweden)

    Mai Moussa CHETIMA

    2009-03-01

    Full Text Available Machine vision solutions are becoming a standard for quality inspection in several manufacturing industries. In the processed-food industry, where the appearance attributes of the product are essential to customer satisfaction, visual inspection can be reliably achieved with machine vision. But such systems often involve the extraction of a larger number of features than those actually needed to ensure proper quality control, making the process less efficient and difficult to tune. This work experiments with several feature selection techniques in order to reduce the number of attributes analyzed by a real-time vision-based food inspection system. Identifying and removing as much irrelevant and redundant information as possible reduces the dimensionality of the data and allows classification algorithms to operate faster. In some cases, classification accuracy can even be improved. Filter-based and wrapper-based feature selectors are experimentally evaluated on different bakery products to identify the best-performing approaches.
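
    The filter/wrapper distinction the paper evaluates is easy to reproduce on synthetic data with scikit-learn; the snippet below is a generic illustration (the dataset is fabricated, not the bakery-product data):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the appearance-feature data: 40 extracted features,
# of which only a handful are informative for the quality label.
X, y = make_classification(n_samples=400, n_features=40, n_informative=5,
                           n_redundant=10, random_state=0)

# Filter approach: rank features by a statistic computed independently of
# any classifier (here mutual information), then keep the top k.
filt = SelectKBest(mutual_info_classif, k=8).fit(X, y)
print("filter keeps:", sorted(filt.get_support(indices=True)))

# Wrapper approach: recursively eliminate features using the classifier
# itself to score candidate subsets (slower, but classifier-aware).
wrap = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X, y)
print("wrapper keeps:", sorted(wrap.get_support(indices=True)))
```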

  2. Obstacle detection by stereo vision of fast correlation matching

    International Nuclear Information System (INIS)

    Jeon, Seung Hoon; Kim, Byung Kook

    1997-01-01

    Mobile robot navigation requires acquiring the positions of obstacles in real time. A common method for this sensing is stereo vision. In this paper, indoor images containing obstacles of various shapes are acquired by binocular vision. To obtain distances to obstacles from these stereo image data, we must deal with the correspondence problem, i.e., finding the region in the other image corresponding to the projection of the same surface region. We present an improved correlation matching method that enhances the speed of arbitrary obstacle detection. The results are faster, simpler matching, robustness to noise, and improved precision. Experimental results under actual surroundings are presented to reveal the performance. (author)
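
    A minimal version of the correlation-matching step on rectified images: slide a patch along the epipolar scanline and keep the disparity with the highest normalized cross-correlation, then convert disparity to depth. The camera parameters and synthetic images below are assumptions for illustration:

```python
import numpy as np

# Minimal correlation matching on rectified stereo: for a patch in the left
# image, search the same scanline in the right image and keep the shift with
# the highest zero-mean normalized cross-correlation (NCC). Depth then
# follows from disparity via assumed focal length and baseline.

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def match_disparity(left, right, u, v, half=5, max_d=64):
    patch = left[v - half:v + half + 1, u - half:u + half + 1]
    best_d, best_s = 0, -1.0
    for d in range(0, min(max_d, u - half)):
        cand = right[v - half:v + half + 1, u - d - half:u - d + half + 1]
        s = ncc(patch, cand)
        if s > best_s:
            best_d, best_s = d, s
    return best_d

rng = np.random.default_rng(4)
right = rng.random((240, 320))
left = np.roll(right, 12, axis=1)          # synthetic 12-pixel disparity
d = match_disparity(left, right, u=160, v=120)
f_px, baseline_m = 400.0, 0.12             # assumed camera parameters
print("disparity:", d, "-> depth (m):", f_px * baseline_m / max(d, 1))
```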

  3. E-navigation Services for Non-SOLAS Ships

    Directory of Open Access Journals (Sweden)

    Kwang An

    2016-06-01

    Full Text Available It is clearly understood that the main benefits of e-navigation are improved safety and better protection of the environment through the promotion of standards for navigational systems and a reduction in human error. In order to meet the expected benefits of e-navigation, e-navigation services should be more focused on non-SOLAS ships. The purpose of this paper is to present the e-navigation services necessary for non-SOLAS ships in order to prevent marine accidents in Korean coastal waters. To meet the objectives of the study, an examination of the present navigation and communication systems for non-SOLAS ships was performed. Based on the IMO's e-navigation Strategy Implementation Plan (SIP) and Korea's national SIP for e-navigation, future trends in the development and implementation of e-navigation were discussed. Consequently, Electronic Navigational Chart (ENC) download and update services, an ENC streaming service, a route support service and a communication support service based on the Maritime Cloud were identified as essential e-navigation services for non-SOLAS ships. This study will help in planning and designing the Korean e-navigation system. It is expected that further research on navigation support systems based on e-navigation will be carried out in order to implement the essential e-navigation services for non-SOLAS ships.

  4. Robot navigation in unknown terrains: Introductory survey of non-heuristic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (US); Kareti, S.; Shi, Weimin [Old Dominion Univ., Norfolk, VA (US). Dept. of Computer Science; Iyengar, S.S. [Louisiana State Univ., Baton Rouge, LA (US). Dept. of Computer Science

    1993-07-01

    A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly without much consideration given to performance parameters such as distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed if the terrain model is known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.

  5. Vision-Based Interest Point Extraction Evaluation in Multiple Environments

    National Research Council Canada - National Science Library

    McKeehan, Zachary D

    2008-01-01

    Computer-based vision is becoming a primary sensor mechanism in many facets of real world 2-D and 3-D applications, including autonomous robotics, augmented reality, object recognition, motion tracking, and biometrics...

  6. Integration of a 3D perspective view in the navigation display: featuring pilot's mental model

    Science.gov (United States)

    Ebrecht, L.; Schmerwitz, S.

    2015-05-01

    Synthetic vision systems (SVS) are a spreading technology in the avionics domain. Several studies demonstrate enhanced situational awareness when using synthetic vision. Since the introduction of synthetic vision, a steady evolution has taken place concerning the primary flight display (PFD) and the navigation display (ND). The main improvements of the ND comprise the representation of colored enhanced ground proximity warning system (EGPWS), weather radar, and TCAS information. Synthetic vision seems to offer high potential to further enhance cockpit display systems. In particular, given the current trend of having a 3D perspective view in an SVS-PFD while leaving the navigational content as well as the methods of interaction unchanged, the question arises of whether and how the gap between the two displays might evolve into a serious problem. This issue becomes important in relation to the transition and combination of strategic and tactical flight guidance. Hence, the pros and cons of 2D and 3D views generally, as well as the gap between the egocentric perspective 3D view of the PFD and the exocentric 2D top and side views of the ND, are discussed. Further, a concept for the integration of a 3D perspective view, i.e., a bird's-eye view, into the synthetic vision ND is presented. The combination of 2D and 3D views in the ND enables a better correlation of the ND and the PFD. Additionally, this supports the building of the pilot's mental model. The authors believe it will improve situational and spatial awareness. It might also prove to further raise the safety margin when operating in mountainous areas.

  7. Performance evaluation of 3D vision-based semi-autonomous control method for assistive robotic manipulator.

    Science.gov (United States)

    Ka, Hyun W; Chung, Cheng-Shiu; Ding, Dan; James, Khara; Cooper, Rory

    2018-02-01

    We developed a 3D vision-based semi-autonomous control interface for assistive robotic manipulators. It was implemented on one of the most popular commercially available assistive robotic manipulators, combined with a low-cost depth-sensing camera mounted on the robot base. To perform a manipulation task with the 3D vision-based semi-autonomous control interface, a user starts operating with a manual control method available to him/her. When detecting objects within a set range, the control interface automatically stops the robot and provides the user with possible manipulation options through audible text output, based on the detected object characteristics. The system then waits until the user states a voice command. Once the user command is given, the control interface drives the robot autonomously until the given command is completed. In empirical evaluations conducted with human subjects from two different groups, it was shown that semi-autonomous control can be used as an alternative control method to enable individuals with impaired motor control to operate robot arms more efficiently by facilitating fine motion control. The advantage of semi-autonomous control was less obvious for simple tasks, but for relatively complex real-life tasks the 3D vision-based semi-autonomous control showed significantly faster performance. Implications for Rehabilitation: A 3D vision-based semi-autonomous control interface will improve clinical practice by providing an alternative control method that is less demanding physically as well as cognitively. It provides the user with task-specific intelligent semi-autonomous manipulation assistance, gives the user the feeling that he or she is still in control at any moment, and is compatible with different types of new and existing manual control methods for assistive robotic manipulators.

  8. The iPod binocular home-based treatment for amblyopia in adults: efficacy and compliance.

    Science.gov (United States)

    Hess, Robert F; Babu, Raiju Jacob; Clavagnier, Simon; Black, Joanna; Bobier, William; Thompson, Benjamin

    2014-09-01

    Occlusion therapy for amblyopia is predicated on the idea that amblyopia is primarily a disorder of monocular vision; however, there is growing evidence that patients with amblyopia have a structurally intact binocular visual system that is rendered functionally monocular due to suppression. Furthermore, we have found that a dichoptic treatment intervention designed to directly target suppression can result in clinically significant improvement in both binocular and monocular visual function in adult patients with amblyopia. The fact that monocular improvement occurs in the absence of any fellow eye occlusion suggests that amblyopia is, in part, due to chronic suppression. Previously the treatment has been administered as a psychophysical task and more recently as a video game that can be played on video goggles or an iPod device equipped with a lenticular screen. The aim of this case-series study of 14 amblyopes (six strabismics, six anisometropes and two mixed) ages 13 to 50 years was to investigate: 1. whether the portable video game treatment is suitable for at-home use and 2. whether an anaglyphic version of the iPod-based video game, which is more convenient for at-home use, has comparable effects to the lenticular version. The dichoptic video game treatment was conducted at home and visual functions assessed before and after treatment. We found that at-home use for 10 to 30 hours restored simultaneous binocular perception in 13 of 14 cases along with significant improvements in acuity (0.11 ± 0.08 logMAR) and stereopsis (0.6 ± 0.5 log units). Furthermore, the anaglyph and lenticular platforms were equally effective. In addition, the iPod devices were able to record a complete and accurate picture of treatment compliance. The home-based dichoptic iPod approach represents a viable treatment for adults with amblyopia. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.

  9. Automatic Barometric Updates from Ground-Based Navigational Aids

    Science.gov (United States)

    1990-03-12

    Automatic Barometric Updates from Ground-Based Navigational Aids. US Department of Transportation, Federal Aviation Administration, Office of Safety... tighter vertical spacing controls, particularly for operations near Terminal Control Areas (TCAs), Airport Radar Service Areas (ARSAs), military climb and...

  10. Ground-Based Global Navigation Satellite System (GNSS) GLONASS Broadcast Ephemeris Data (hourly files) from NASA CDDIS

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset consists of ground-based Global Navigation Satellite System (GNSS) GLObal NAvigation Satellite System (GLONASS) Broadcast Ephemeris Data (hourly files)...

  11. A navigator-based rigid body motion correction for magnetic resonance imaging

    International Nuclear Information System (INIS)

    Ullisch, Marcus Goerge

    2012-01-01

    A novel three-dimensional navigator k-space trajectory for rigid body motion detection for Magnetic Resonance Imaging (MRI) - the Lissajous navigator - was developed and quantitatively compared to the existing spherical navigator trajectory [1]. The spherical navigator cannot sample the complete spherical surface due to slew rate limitations of the scanner hardware. By utilizing a two-dimensional Lissajous figure which is projected onto the spherical surface, the Lissajous navigator overcomes this limitation. The complete sampling of the sphere consequently leads to rotation estimates with higher and more isotropic accuracy. Simulations and phantom measurements were performed for both navigators. Both simulations and measurements show a significantly higher overall accuracy of the Lissajous navigator and a higher isotropy of the rotation estimates. Measured under identical conditions with identical postprocessing, the measured mean absolute error of the rotation estimates for the Lissajous navigator was 38% lower (0.3°) than for the spherical navigator (0.5°). The maximum error of the Lissajous navigator was reduced by 48% relative to the spherical navigator. The Lissajous navigator delivers higher accuracy of rotation estimation and a higher degree of isotropy than the spherical navigator with no evident drawbacks; these are two decisive advantages, especially for high-resolution anatomical imaging.

  13. Commercial Flight Crew Decision-Making during Low-Visibility Approach Operations Using Fused Synthetic/Enhanced Vision Systems

    Science.gov (United States)

    Kramer, Lynda J.; Bailey, Randall E.; Prinzel, Lawrence J., III

    2007-01-01

    NASA is investigating revolutionary crew-vehicle interface technologies that strive to proactively overcome aircraft safety barriers that would otherwise constrain the full realization of the next-generation air transportation system. A fixed-base piloted simulation experiment was conducted to evaluate the complementary use of Synthetic and Enhanced Vision technologies. Specific focus was placed on new techniques for integration and/or fusion of Enhanced and Synthetic Vision and its impact within a two-crew flight deck on the crew's decision-making process during low-visibility approach and landing operations. Overall, the experimental data showed that significant improvements in situation awareness, without concomitant increases in workload and display clutter, could be provided by the integration and/or fusion of synthetic and enhanced vision technologies for the pilot-flying and the pilot-not-flying. During non-normal operations, the ability of the crew to handle substantial navigational errors and runway incursions was neither improved nor adversely impacted by the display concepts. The addition of Enhanced Vision may not, by itself, provide an improvement in runway incursion detection without being specifically tailored for this application. Existing enhanced vision system procedures were effectively used in the crew decision-making process during approach and missed approach operations, but having to forcibly transition from an excellent FLIR image to natural vision by 100 ft above field level was awkward for the pilot-flying.

  14. A vision based row detection system for sugar beet

    NARCIS (Netherlands)

    Bakker, T.; Wouters, H.; Asselt, van C.J.; Bontsema, J.; Tang, L.; Müller, J.; Straten, van G.

    2008-01-01

    One way of guiding autonomous vehicles through the field is to use a vision-based row detection system. A new approach to row recognition is presented, based on a grey-scale Hough transform on intelligently merged images, resulting in a considerable improvement in the speed of image processing.
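
    The grey-scale Hough idea can be sketched with OpenCV on a single frame (the paper's intelligent image merging is replaced here by a simple excess-green transform and Otsu threshold, and the input file name is hypothetical):

```python
import cv2
import numpy as np

# Sketch of the row-detection idea: an excess-green transform highlights
# plant material, and a Hough transform on the thresholded image recovers
# the dominant row direction. A single frame is used here.

frame = cv2.imread("field.jpg")                  # hypothetical BGR field image
b, g, r = [frame[:, :, i].astype(np.float32) for i in range(3)]
exg = 2 * g - r - b                              # excess-green vegetation index
exg = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

_, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
lines = cv2.HoughLines(mask, rho=1, theta=np.pi / 180, threshold=200)

if lines is not None:
    # Each line is (rho, theta); theta near 0 means a near-vertical row.
    thetas = lines[:, 0, 1]
    print("dominant row angle (deg):", np.rad2deg(np.median(thetas)))
```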

  15. Optic disc boundary segmentation from diffeomorphic demons registration of monocular fundus image sequences versus 3D visualization of stereo fundus image pairs for automated early stage glaucoma assessment

    Science.gov (United States)

    Gatti, Vijay; Hill, Jason; Mitra, Sunanda; Nutter, Brian

    2014-03-01

    Despite the current availability of advanced scanning and 3-D imaging technologies in resource-rich regions, worldwide screening tests for early detection and progression of glaucoma still consist of a variety of simple tools, including fundus image-based parameters such as the CDR (cup-to-disc diameter ratio) and CAR (cup-to-disc area ratio), especially in resource-poor regions. Reliable automated computation of the relevant parameters from fundus image sequences requires robust non-rigid registration and segmentation techniques. Recent research has demonstrated that proper non-rigid registration of multi-view monocular fundus image sequences can result in acceptable segmentation of cup boundaries for automated computation of CAR and CDR. This work introduces a composite diffeomorphic demons registration algorithm for segmentation of cup boundaries from a sequence of monocular images and compares the resulting CAR and CDR values with those computed manually by experts and from 3-D visualization of stereo pairs. Our preliminary results show that the automated computation of CDR and CAR from composite diffeomorphic segmentation of monocular image sequences yields values comparable with those from the other two techniques and thus may provide global healthcare with a cost-effective yet accurate tool for management of glaucoma in its early stage.
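
    Once cup and disc boundaries are segmented, by whichever of the compared techniques, the screening parameters themselves are simple mask geometry. A sketch with synthetic circular masks (radii chosen arbitrarily for illustration):

```python
import numpy as np

# Given binary masks of the segmented cup and disc (however obtained:
# registered monocular sequences or stereo reconstruction), the screening
# parameters reduce to simple geometry. Masks here are synthetic circles.

def disk_mask(shape, center, radius):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

disc = disk_mask((400, 400), (200, 200), 120)   # optic disc boundary
cup = disk_mask((400, 400), (200, 200), 60)     # cup boundary

car = cup.sum() / disc.sum()                     # cup-to-disc area ratio

def vertical_diameter(mask):
    rows = np.flatnonzero(mask.any(axis=1))      # rows containing the region
    return rows[-1] - rows[0] + 1

cdr = vertical_diameter(cup) / vertical_diameter(disc)
print(f"CAR = {car:.3f}, vertical CDR = {cdr:.3f}")   # ~0.25 and ~0.5 here
```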

  16. Sampling in image space for vision based SLAM

    NARCIS (Netherlands)

    Booij, O.; Zivkovic, Z.; Kröse, B.

    2008-01-01

    Loop closing in vision based SLAM applications is a difficult task. Comparing new image data with all previous image data acquired for the map is practically impossible because of the high computational costs. This problem is part of the bigger problem to acquire local geometric constraints from

  17. Disambiguation of Necker cube rotation by monocular and binocular depth cues: relative effectiveness for establishing long-term bias.

    Science.gov (United States)

    Harrison, Sarah J; Backus, Benjamin T; Jain, Anshul

    2011-05-11

    The apparent direction of rotation of perceptually bistable wire-frame (Necker) cubes can be conditioned to depend on retinal location by interleaving their presentation with cubes that are disambiguated by depth cues (Haijiang, Saunders, Stone, & Backus, 2006; Harrison & Backus, 2010a). The long-term nature of the learned bias is demonstrated by resistance to counter-conditioning on a consecutive day. In previous work, either binocular disparity and occlusion, or a combination of monocular depth cues that included occlusion, internal occlusion, haze, and depth-from-shading, were used to control the rotation direction of disambiguated cubes. Here, we test the relative effectiveness of these two sets of depth cues in establishing the retinal location bias. Both cue sets were highly effective in establishing a perceptual bias on Day 1 as measured by the perceived rotation direction of ambiguous cubes. The effect of counter-conditioning on Day 2, on perceptual outcome for ambiguous cubes, was independent of whether the cue set was the same or different as Day 1. This invariance suggests that a common neural population instantiates the bias for rotation direction, regardless of the cue set used. However, in a further experiment where only disambiguated cubes were presented on Day 1, perceptual outcome of ambiguous cubes during Day 2 counter-conditioning showed that the monocular-only cue set was in fact more effective than disparity-plus-occlusion for causing long-term learning of the bias. These results can be reconciled if the conditioning effect of Day 1 ambiguous trials in the first experiment is taken into account (Harrison & Backus, 2010b). We suggest that monocular disambiguation leads to stronger bias either because it more strongly activates a single neural population that is necessary for perceiving rotation, or because ambiguous stimuli engage cortical areas that are also engaged by monocularly disambiguated stimuli but not by disparity-disambiguated stimuli

  18. Signal- and Symbol-based Representations in Computer Vision

    DEFF Research Database (Denmark)

    Krüger, Norbert; Felsberg, Michael

    We discuss problems of signal- and symbol-based representations in terms of three dilemmas which are faced in the design of each vision system. Signal- and symbol-based representations are opposite ends of a spectrum of conceivable design decisions caught at opposite sides of the dilemmas. We make the inherent problems explicit and describe potential design decisions for artificial visual systems to deal with the dilemmas.

  19. Towards real-time body pose estimation for presenters in meeting environments

    NARCIS (Netherlands)

    Poppe, Ronald Walter; Heylen, Dirk K.J.; Nijholt, Antinus; Poel, Mannes

    2005-01-01

    This paper describes a computer-vision-based approach to body pose estimation. The algorithm can be executed in real time and processes low-resolution, monocular image sequences. A silhouette is extracted and matched against a projection of a 16 DOF human body model. In addition, skin color is used
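
    The matching step at the heart of such silhouette-based approaches can be sketched as a simple overlap score between the observed foreground mask and the rendered projection of a pose hypothesis; background subtraction and the body-model renderer are stubbed out with synthetic data below:

```python
import numpy as np

# Core of silhouette-based pose scoring: compare the extracted foreground
# silhouette with the rendered projection of a body-model pose hypothesis.

def silhouette(frame, background, thresh=25):
    """Foreground mask from simple background differencing (stand-in for a
    proper background model)."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def overlap_score(observed, projected):
    """Intersection-over-union between observed and model silhouettes; a
    pose search would maximize this over the 16 model DOFs."""
    inter = np.logical_and(observed, projected).sum()
    union = np.logical_or(observed, projected).sum()
    return inter / union if union else 0.0

rng = np.random.default_rng(5)
background = rng.integers(0, 255, (120, 160)).astype(np.uint8)
frame = background.copy()
frame[30:90, 60:100] = 255                      # synthetic "person" region

obs = silhouette(frame, background)
proj = np.zeros_like(obs); proj[28:92, 58:102] = True   # candidate pose mask
print("IoU for this pose hypothesis:", round(overlap_score(obs, proj), 3))
```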

  20. Neural-network-based depth computation for blind navigation

    Science.gov (United States)

    Wong, Farrah; Nagarajan, Ramachandran R.; Yaacob, Sazali

    2004-12-01

    Research undertaken to help blind people navigate autonomously or with minimal assistance is termed "blind navigation". In this research, an aid that could help blind people in their navigation is proposed. Distance serves as an important cue during navigation. A stereovision navigation aid, implemented with two digital video cameras that are spaced apart and fixed on a headgear to obtain distance information, is presented. In this paper, a neural network methodology is used to obtain the required parameters of the camera, a process known as camera calibration. These parameters are not known in advance but are obtained by adjusting the weights in the network. The inputs to the network consist of the matching features in the stereo image pair. A backpropagation network with 16 input neurons, 3 hidden neurons and 1 output neuron, which gives depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray and black. Preliminary results have shown that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to approach and pick up an object of interest.
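
    The record's 16-3-1 backpropagation network is not available, but an equally small MLP can be trained on synthetic stereo data following the ideal relation depth = f·B/disparity to convey the idea (rig parameters and noise level are assumptions, and a network this small gives only a rough fit):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Rough stand-in for the record's small backprop network: train a tiny MLP
# on synthetic data following the ideal relation depth = f*B/disparity.

rng = np.random.default_rng(6)
f_px, baseline_m = 400.0, 0.15          # assumed stereo rig parameters

disparity = rng.uniform(2, 64, (2000, 1))
depth = f_px * baseline_m / disparity[:, 0]
depth += rng.normal(0, 0.02, depth.shape)        # matching noise

net = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0)
net.fit(disparity, depth)

test = np.array([[8.0], [16.0], [32.0]])
print("predicted depth (m):", net.predict(test))
print("ideal depth (m):   ", f_px * baseline_m / test[:, 0])
```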

  1. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template resembling the facial muscle distribution. After regularization, the time sequences of the trait changes in space-time over a complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by manifold learning with neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former captures more structural characteristics of the data to be classified in space-time.
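
    A hedged sketch of the recognition stage only: manifold dimensionality reduction followed by a classifier. Scikit-learn has no neighborhood-preserving embedding, so LocallyLinearEmbedding stands in for NPE, the HCRF component is omitted, and the data are random placeholders:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((120, 64))          # placeholder flattened expression-trait matrices
y = rng.integers(0, 6, 120)        # six basic-expression labels (assumption)

# LLE stands in for neighborhood-preserving embedding (NPE);
# a plain SVM replaces the paper's integrated HCRF+SVM classifier.
clf = make_pipeline(
    LocallyLinearEmbedding(n_neighbors=10, n_components=10),
    SVC(kernel="rbf"),
)
clf.fit(X[:100], y[:100])
print(clf.score(X[100:], y[100:]))  # toy accuracy on held-out samples
```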

  2. Ratbot automatic navigation by electrical reward stimulation based on distance measurement in unknown environments.

    Science.gov (United States)

    Gao, Liqiang; Sun, Chao; Zhang, Chen; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2013-01-01

Traditional automatic navigation methods for bio-robots are constrained to configured environments and thus cannot be applied to tasks in unknown environments. By treating bio-robots in the same way as mechanical robots, with no consideration of the animal's own innate abilities, those methods neglect the intelligent behavior of animals. This paper proposes a novel ratbot automatic navigation method for unknown environments that uses only reward stimulation and distance measurement. By exploiting the rat's habit of thigmotaxis and its reward-seeking behavior, this method incorporates the rat's intrinsic intelligence for obstacle avoidance and path searching into navigation. Experimental results show that the method works robustly and can successfully navigate the ratbot to a target in an unknown environment. This work lays a solid basis for the application of ratbots and also has significant implications for the automatic navigation of other bio-robots.

  3. Vision based persistent localization of a humanoid robot for locomotion tasks

    Directory of Open Access Journals (Sweden)

    Martínez Pablo A.

    2016-09-01

    Full Text Available Typical monocular localization schemes involve a search for matches between reprojected 3D world points and 2D image features in order to estimate the absolute scale transformation between the camera and the world. Successfully calculating such transformation implies the existence of a good number of 3D points uniformly distributed as reprojected pixels around the image plane. This paper presents a method to control the march of a humanoid robot towards directions that are favorable for visual based localization. To this end, orthogonal diagonalization is performed on the covariance matrices of both sets of 3D world points and their 2D image reprojections. Experiments with the NAO humanoid platform show that our method provides persistence of localization, as the robot tends to walk towards directions that are desirable for successful localization. Additional tests demonstrate how the proposed approach can be incorporated into a control scheme that considers reaching a target position.
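
    A small sketch of the covariance-diagonalization idea, assuming a cloud of tracked 3D map points: the eigenvector with the largest eigenvalue gives the dominant spread of the landmarks, suggesting a direction that keeps reprojected features well distributed. The control law itself is not reproduced; this is only the linear-algebra step:

```python
import numpy as np

def favorable_direction(points_3d: np.ndarray) -> np.ndarray:
    """Return a unit vector along the dominant spread of the landmark cloud.

    points_3d: (N, 3) array of world points currently used for localization.
    Orthogonal diagonalization of the covariance (an eigendecomposition)
    yields the principal axes; keeping the widest axis in view tends to
    keep reprojections well distributed over the image plane.
    """
    centered = points_3d - points_3d.mean(axis=0)
    cov = centered.T @ centered / len(points_3d)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    return eigvecs[:, -1]                      # dominant principal axis

pts = np.random.default_rng(2).normal(size=(50, 3)) * [5.0, 1.0, 0.5]
print(favorable_direction(pts))                # ~ +/- x axis for this cloud
```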

  4. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    Science.gov (United States)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes chosen to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to familiarize itself with regular faces and actions in order to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.

  5. Indoor Pedestrian Navigation Based on Hybrid Route Planning and Location Modeling

    DEFF Research Database (Denmark)

    Schougaard, Kari Rye; Grønbæk, Kaj; Scharling, Tejs

    2012-01-01

    This paper introduces methods and services called PerPosNav for development of custom indoor pedestrian navigation applications to be deployed on a variety of platforms. PerPosNav combines symbolic and geometry based modeling of buildings, and in turn combines graph-based and geometric route...

  6. Lunar Navigation Architecture Design Considerations

    Science.gov (United States)

    D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael

    2009-01-01

The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flightpath-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).

  7. Rotational Kinematics Model Based Adaptive Particle Filter for Robust Human Tracking in Thermal Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Yazhe Tang

    2015-01-01

Full Text Available This paper presents a novel surveillance system named the thermal omnidirectional vision (TOV) system, which can work in total darkness with a wide field of view. Unlike a conventional thermal vision sensor, the proposed vision system exhibits serious nonlinear distortion due to the effect of the quadratic mirror. To effectively model the inherent distortion of omnidirectional vision, an equivalent sphere projection is employed to adaptively calculate the parameterized distorted neighborhood of an object in the image plane. With the equivalent-projection-based adaptive neighborhood calculation, a distortion-invariant gradient coding feature is proposed for thermal catadioptric vision. For robust tracking, an adaptive particle filter based on a rotational kinematics model is proposed, exploiting the characteristics of omnidirectional vision; it can handle multiple movements effectively, including rapid motions. Finally, experiments are presented to verify the performance of the proposed algorithm for human tracking in the TOV system.
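
    A generic bootstrap particle filter in the spirit described above. The rotational (angular) state model and the likelihood are placeholders: the paper's exact kinematics and its gradient-coding feature are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500                                   # number of particles

# State: angular position of the target around the omnidirectional image
theta = rng.uniform(0, 2 * np.pi, N)      # particle set
omega_guess = 0.05                        # assumed angular rate (rad/frame)

def likelihood(theta, z):
    # Placeholder for the distortion-invariant gradient-coding match score
    diff = np.angle(np.exp(1j * (theta - z)))     # wrapped angular error
    return np.exp(-0.5 * (diff / 0.2) ** 2)

for z in [0.1, 0.18, 0.22, 0.35]:         # fake angular measurements
    # Predict with the rotational kinematics model plus process noise
    theta = theta + omega_guess + rng.normal(0, 0.05, N)
    # Weight by measurement likelihood, then resample (bootstrap filter)
    w = likelihood(theta, z)
    w /= w.sum()
    theta = theta[rng.choice(N, N, p=w)]
    est = np.angle(np.mean(np.exp(1j * theta)))   # circular mean estimate
    print(f"measurement {z:.2f} -> estimate {est:.2f}")
```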

  8. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    International Nuclear Information System (INIS)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik

    2016-01-01

Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed, and many studies are being conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, or shape of objects, the possibility of full-field measurement, and the possibility of measuring the distribution of stress or defects of structures based on the measured displacement and strain maps. In this work, strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be checked against strain gauges and clip gauges, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.
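
    As an illustration of the noncontact principle (not the paper's specific algorithm), the displacement of a small subset between a reference and a deformed image can be estimated by normalized cross-correlation; the file names and subset location are hypothetical:

```python
import cv2

ref = cv2.imread("coupon_before.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
cur = cv2.imread("coupon_after.png", cv2.IMREAD_GRAYSCALE)

y0, x0, s = 100, 150, 31           # subset location/size on the reference image
template = ref[y0:y0 + s, x0:x0 + s]

# Normalized cross-correlation of the subset against the deformed image
res = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
_, _, _, (x1, y1) = cv2.minMaxLoc(res)

dx, dy = x1 - x0, y1 - y0          # integer-pixel displacement of the subset
print(f"subset displacement: dx={dx}px, dy={dy}px")
# Strain would follow from the displacement gradient over many such subsets.
```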

  9. Coupon Test of an Elbow Component by Using Vision-based Measurement System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Wan; Jeon, Bub Gyu; Choi, Hyoung Suk; Kim, Nam Sik [Pusan National University, Busan (Korea, Republic of)

    2016-05-15

Among the various methods to overcome this shortcoming, vision-based methods to measure the strain of a structure are being proposed, and many studies are being conducted on them. The vision-based measurement method is a noncontact method for measuring the displacement and strain of objects by comparing images before and after deformation. This method offers such advantages as no limitations on the surface condition, temperature, or shape of objects, the possibility of full-field measurement, and the possibility of measuring the distribution of stress or defects of structures based on the measured displacement and strain maps. In this work, strains were measured with various image-based methods in a coupon test and the measurements were compared. In the future, the validity of the algorithm will be checked against strain gauges and clip gauges, and based on the results, the physical properties of materials will be measured using a vision-based measurement system. This will contribute to the evaluation of the reliability and effectiveness required for investigating local damage.

  10. Digital waterway construction based on inland electronic navigation chart

    Science.gov (United States)

    Wang, Xue; Pan, Junfeng; Zhu, Weiwei

    2015-12-01

With the advantages of large capacity, long distance, low energy consumption, low cost, little land occupation and light pollution, inland waterway transportation is one of the most important constituents of the comprehensive transportation system and of comprehensive water resources utilization in China. As one of the "three elements" of navigation, the waterway is the essential basis for the development of water transportation and plays a key supporting role in the shipping economy. This paper discusses how to realize the informatization and digitization of waterway management by constructing an integrated system for standard inland electronic navigation chart production, waterway maintenance, navigation mark remote sensing and control, ship dynamic management, and water level remote sensing and reporting, which can also serve as the foundation of intelligent waterway construction. Digital waterway construction is an information project with practical significance for waterways. It not only meets the growing assurance and security requirements for waterways, but also offers significant advantages in improving transport efficiency, reducing costs, and promoting energy conservation. This study lays a solid foundation for realizing intelligent waterways and building a smooth, efficient, safe, green, modern inland waterway system, and must be considered an unavoidable issue for the coordinated development of "low-carbon" transportation and the social economy.

  11. Contrast masking in strabismic amblyopia: attenuation, noise, interocular suppression and binocular summation.

    Science.gov (United States)

    Baker, Daniel H; Meese, Tim S; Hess, Robert F

    2008-07-01

    To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.
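
    As a schematic illustration only (the published two-stage model has a specific parameterization that is not reproduced here), the sketch below shows how an "attenuation plus noise" lesion of one eye reduces its monocular response while leaving interocular suppression and binocular summation intact; all constants, exponents, and the noise model are assumptions:

```python
import numpy as np

def two_stage_response(cL, cR, attenL=1.0, noiseL=0.0, rng=None):
    """Schematic two-stage binocular contrast gain-control response.

    Stage 1: monocular gain control with interocular suppression;
    Stage 2: gain control of the binocular sum. All constants are
    illustrative, not the published fit; attenL/noiseL 'lesion' the left eye.
    """
    rng = rng or np.random.default_rng(0)
    cL = max(attenL * cL + (rng.normal(0.0, noiseL) if noiseL else 0.0), 0.0)
    s1L = cL ** 2 / (1 + cL + 0.5 * cR)    # left eye, suppressed by right
    s1R = cR ** 2 / (1 + cR + 0.5 * cL)    # right eye, suppressed by left
    b = s1L + s1R                          # binocular summation
    return b ** 1.3 / (1 + b)              # second gain-control stage

# Same stimulus, weaker response through the 'amblyopic' (lesioned) eye:
print(two_stage_response(0.5, 0.0))                           # intact eye alone
print(two_stage_response(0.5, 0.0, attenL=0.5, noiseL=0.02))  # lesioned eye alone
```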

  12. Developing a vision and strategic action plan for future community-based residency training.

    Science.gov (United States)

    Skelton, Jann B; Owen, James A

    2016-01-01

    The Community Pharmacy Residency Program (CPRP) Planning Committee convened to develop a vision and a strategic action plan for the advancement of community pharmacy residency training. Aligned with the profession's efforts to achieve provider status and expand access to care, the Future Vision and Action Plan for Community-based Residency Training will provide guidance, direction, and a strategic action plan for community-based residency training to ensure that the future needs of community-based pharmacist practitioners are met. National thought leaders, selected because of their leadership in pharmacy practice, academia, and residency training, served on the planning committee. The committee conducted a series of conference calls and an in-person strategic planning meeting held on January 13-14, 2015. Outcomes from the discussions were supplemented with related information from the literature. Results of a survey of CPRP directors and preceptors also informed the planning process. The vision and strategic action plan for community-based residency training is intended to advance training to meet the emerging needs of patients in communities that are served by the pharmacy profession. The group anticipated the advanced skills required of pharmacists serving as community-based pharmacist practitioners and the likely education, training and competencies required by future residency graduates in order to deliver these services. The vision reflects a transformation of community residency training, from CPRPs to community-based residency training, and embodies the concept that residency training should be primarily focused on training the individual pharmacist practitioner based on the needs of patients served within the community, and not on the physical location where pharmacy services are provided. The development of a vision statement, core values statements, and strategic action plan will provide support, guidance, and direction to the profession of pharmacy to

  13. NAVIGATION IN LARGE-FORMAT BUILDINGS BASED ON RFID SENSORS AND QR AND AR MARKERS

    Directory of Open Access Journals (Sweden)

    Tomasz Szymczyk

    2016-09-01

Full Text Available The authors address the problem of passive navigation in large buildings. Based on the example of several interconnected buildings housing departments of the Lublin University of Technology, as well as a conceptual navigation system, the paper presents one possible way of leading the user from the entrance of the building to a particular room. An analysis of different types of users is made, and different ways of navigating the intricate corridors, each best suited to a given user type, are proposed. Three means of user localisation are suggested: RFID, AR and QR markers. A graph of connections between specific rooms was built, with proposed edge weights representing "the difficulty of covering a given distance". Dijkstra's algorithm is used in the navigation process. The route is presented as multimedia information: a voice-over or an animated arrow showing the direction, displayed on the screen of a smartphone with proprietary software installed. It is also possible to inform the user of his current location, based on static information stored in the QR code.
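
    A minimal sketch of the routing step: Dijkstra's algorithm over a weighted room-connection graph. The node names and weights below are hypothetical, standing in for the building graph described above:

```python
import heapq

# Hypothetical fragment of the room-connection graph; edge weights encode
# "the difficulty of covering a given distance" between adjacent nodes.
graph = {
    "entrance": {"corridor_A": 2, "stairs_1": 5},
    "corridor_A": {"entrance": 2, "room_101": 1, "stairs_1": 3},
    "stairs_1": {"entrance": 5, "corridor_A": 3, "corridor_B": 4},
    "corridor_B": {"stairs_1": 4, "room_205": 2},
    "room_101": {"corridor_A": 1},
    "room_205": {"corridor_B": 2},
}

def dijkstra(graph, start, goal):
    """Classic Dijkstra shortest path; returns total cost and node sequence."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(dijkstra(graph, "entrance", "room_205"))  # cost 11 and the route to follow
```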

  14. Monocular perceptual learning of contrast detection facilitates binocular combination in adults with anisometropic amblyopia

    OpenAIRE

    Chen, Zidong; Li, Jinrong; Liu, Jing; Cai, Xiaoxiao; Yuan, Junpeng; Deng, Daming; Yu, Minbin

    2016-01-01

Perceptual learning in contrast detection improves monocular visual function in adults with anisometropic amblyopia; however, its effect on binocular combination remains unknown. Given that the amblyopic visual system suffers from pronounced binocular functional loss, it is important to address how the amblyopic visual system responds to such training strategies under binocular viewing conditions. Anisometropic amblyopes (n = 13) were asked to complete two psychophysical supra-threshold binoc...

  15. Remote media vision-based computer input device

    Science.gov (United States)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  16. Peripheral vision benefits spatial learning by guiding eye movements.

    Science.gov (United States)

    Yamamoto, Naohide; Philbeck, John W

    2013-01-01

    The loss of peripheral vision impairs spatial learning and navigation. However, the mechanisms underlying these impairments remain poorly understood. One advantage of having peripheral vision is that objects in an environment are easily detected and readily foveated via eye movements. The present study examined this potential benefit of peripheral vision by investigating whether competent performance in spatial learning requires effective eye movements. In Experiment 1, participants learned room-sized spatial layouts with or without restriction on direct eye movements to objects. Eye movements were restricted by having participants view the objects through small apertures in front of their eyes. Results showed that impeding effective eye movements made subsequent retrieval of spatial memory slower and less accurate. The small apertures also occluded much of the environmental surroundings, but the importance of this kind of occlusion was ruled out in Experiment 2 by showing that participants exhibited intact learning of the same spatial layouts when luminescent objects were viewed in an otherwise dark room. Together, these findings suggest that one of the roles of peripheral vision in spatial learning is to guide eye movements, highlighting the importance of spatial information derived from eye movements for learning environmental layouts.

  17. Improving Real World Performance of Vision Aided Navigation in a Flight Environment

    Science.gov (United States)

    2016-09-15

…alternatives for military systems by leveraging information from non-navigation sensors that are already deployed on fielded platforms. [The remainder of this excerpt is list-of-figures residue; the recoverable captions describe image-plane x and y residuals plotted per image and an illustration of a landmark database.]

  18. Robust object tracking techniques for vision-based 3D motion analysis applications

    Science.gov (United States)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the obtained data, as well as user convenience, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes from 2 to 4 technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for highly accurate 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The results of the algorithms' evaluation show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.
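
    A minimal two-camera triangulation sketch in the photogrammetric spirit described above, assuming already-calibrated projection matrices. This is the standard linear (DLT) method, not the system's own calibration or orientation procedure:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: matched pixel coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Toy setup: two cameras 1 m apart looking down +Z, focal length 800 px
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([0.3, -0.2, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]       # projections of the true point
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))          # ~ [0.3, -0.2, 5.0]
```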

  19. Examining care navigation: librarian participation in a team-based approach?

    Science.gov (United States)

    Nix, A Tyler; Huber, Jeffrey T; Shapiro, Robert M; Pfeifle, Andrea

    2016-04-01

    This study investigated responsibilities, skill sets, degrees, and certifications required of health care navigators in order to identify areas of potential overlap with health sciences librarianship. The authors conducted a content analysis of health care navigator position announcements and developed and assigned forty-eight category terms to represent the sample's responsibilities and skill sets. Coordination of patient care and a bachelor's degree were the most common responsibility and degree requirements, respectively. Results also suggest that managing and providing health information resources is an area of overlap between health care navigators and health sciences librarians, and that librarians are well suited to serve on navigation teams. Such overlap may provide an avenue for collaboration between navigators and health sciences librarians.

  20. Iconic memory-based omnidirectional route panorama navigation.

    Science.gov (United States)

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.

  1. A New Algorithm for ABS/GPS Integration Based on Fuzzy-Logic in Vehicle Navigation System

    Directory of Open Access Journals (Sweden)

    Ali Amin Zadeh

    2011-10-01

Full Text Available GPS-based vehicle navigation systems have difficulties tracking vehicles in urban canyons due to poor satellite availability. An ABS (Antilock Brake System) navigation system consists of self-contained optical encoders mounted on the vehicle wheels that can continuously provide accurate short-term positioning information. In this paper, a new concept for GPS/ABS integration based on fuzzy logic is presented. The proposed algorithm identifies GPS position accuracy based on knowledge of the environment and the vehicle dynamics. GPS is used as the reference while its quality is good and is replaced by the ABS positioning system when GPS information is unreliable. We compare our proposed algorithm with another common algorithm in a real environment. Our results show that the proposed algorithm can significantly improve the stability and reliability of the ABS/GPS navigation system.
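
    A toy fuzzy-logic sketch of the switching idea: membership functions grade GPS quality (here from HDOP and satellite count, which are assumptions, since the paper's actual inputs are environment and vehicle-dynamics knowledge), and the fused position leans on GPS or ABS dead reckoning accordingly:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def gps_reliability(hdop, n_sats):
    """Toy fuzzy rules: good geometry and many satellites -> reliable GPS."""
    hdop_good = tri(hdop, 0, 1, 3)          # low dilution of precision
    sats_many = tri(n_sats, 4, 8, 12)       # healthy satellite count
    hdop_bad = tri(hdop, 2, 6, 20)
    w_reliable = min(hdop_good, sats_many)  # rule 1 (AND of antecedents)
    w_unreliable = hdop_bad                 # rule 2
    return w_reliable / (w_reliable + w_unreliable + 1e-9)  # defuzzify

def fuse(gps_pos, abs_pos, hdop, n_sats):
    """Blend the GPS fix with the ABS dead-reckoned position by reliability."""
    r = gps_reliability(hdop, n_sats)
    return r * np.asarray(gps_pos) + (1 - r) * np.asarray(abs_pos)

print(fuse([10.0, 5.0], [10.4, 5.3], hdop=0.9, n_sats=9))   # trusts GPS
print(fuse([10.0, 5.0], [10.4, 5.3], hdop=8.0, n_sats=4))   # trusts ABS
```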

  2. Automated Functional Testing based on the Navigation of Web Applications

    Directory of Open Access Journals (Sweden)

    Boni García

    2011-08-01

Full Text Available Web applications are becoming more and more complex. Testing such applications is an intricate, hard, and time-consuming activity. Therefore, testing is often poorly performed or skipped by practitioners. Test automation can help to avoid this situation. Hence, this paper presents a novel approach to automated software testing for web applications based on their navigation. On the one hand, web navigation is the process of traversing a web application using a browser. On the other hand, functional requirements are actions that an application must perform. Therefore, evaluating the correct navigation of a web application amounts to assessing its specified functional requirements. The proposed automation method operates at four levels: test case generation, test data derivation, test case execution, and test case reporting. This method is driven by three kinds of inputs: (i) UML models; (ii) Selenium scripts; (iii) XML files. We have implemented our approach in an open-source testing framework named Automatic Testing Platform. The validation of this work has been carried out by means of a case study, in which the target is a real invoice management system developed using a model-driven approach.
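
    A minimal navigation-driven functional test in the Selenium style the paper builds on; the URL, element IDs, and expected title are hypothetical placeholders for an invoice-management app:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical invoice-management app; the URL and element IDs are placeholders.
driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/invoices/login")
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Navigation assertion: traversing this path correctly amounts to
    # assessing the requirement "user can reach the invoice list after login".
    driver.find_element(By.LINK_TEXT, "Invoices").click()
    assert "Invoice list" in driver.title
finally:
    driver.quit()
```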

  3. Marine and Hydrokinetic Renewable Energy Technologies: Potential Navigational Impacts and Mitigation Measures

    Energy Technology Data Exchange (ETDEWEB)

    Cool, Richard, M.; Hudon, Thomas, J.; Basco, David, R.; Rondorf, Neil, E.

    2009-12-10

    On April 15, 2008, the Department of Energy (DOE) issued a Funding Opportunity Announcement for Advanced Water Power Projects which included a Topic Area for Marine and Hydrokinetic Renewable Energy Market Acceleration Projects. Within this Topic Area, DOE identified potential navigational impacts of marine and hydrokinetic renewable energy technologies and measures to prevent adverse impacts on navigation as a sub-topic area. DOE defines marine and hydrokinetic technologies as those capable of utilizing one or more of the following resource categories for energy generation: ocean waves; tides or ocean currents; free flowing water in rivers or streams; and energy generation from the differentials in ocean temperature. PCCI was awarded Cooperative Agreement DE-FC36-08GO18177 from the DOE to identify the potential navigational impacts and mitigation measures for marine hydrokinetic technologies, as summarized herein. The contract also required cooperation with the U.S. Coast Guard (USCG) and two recipients of awards (Pacific Energy Ventures and reVision) in a sub-topic area to develop a protocol to identify streamlined, best-siting practices. Over the period of this contract, PCCI and our sub-consultants, David Basco, Ph.D., and Neil Rondorf of Science Applications International Corporation, met with USCG headquarters personnel, with U.S. Army Corps of Engineers headquarters and regional personnel, with U.S. Navy regional personnel and other ocean users in order to develop an understanding of existing practices for the identification of navigational impacts that might occur during construction, operation, maintenance, and decommissioning. At these same meetings, “standard” and potential mitigation measures were discussed so that guidance could be prepared for project developers. Concurrently, PCCI reviewed navigation guidance published by the USCG and international community. This report summarizes the results of this effort, provides guidance in the form of a

  4. Stereo using monocular cues within the tensor voting framework.

    Science.gov (United States)

    Mordohai, Philippos; Medioni, Gérard

    2006-06-01

    We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

  5. Monocular oral reading after treatment of dense congenital unilateral cataract

    Science.gov (United States)

    Birch, Eileen E.; Cheng, Christina; Christina, V; Stager, David R.

    2010-01-01

    Background Good long-term visual acuity outcomes for children with dense congenital unilateral cataracts have been reported following early surgery and good compliance with postoperative amblyopia therapy. However, treated eyes rarely achieve normal visual acuity and there has been no formal evaluation of the utility of the treated eye for reading. Methods Eighteen children previously treated for dense congenital unilateral cataract were tested monocularly with the Gray Oral Reading Test, 4th edition (GORT-4) at 7 to 13 years of age using two passages for each eye, one at grade level and one at +1 above grade level. In addition, right eyes of 55 normal children age 7 to 13 served as a control group. The GORT-4 assesses reading rate, accuracy, fluency, and comprehension. Results Visual acuity of treated eyes ranged from 0.1 to 2.0 logMAR and of fellow eyes from −0.1 to 0.2 logMAR. Treated eyes scored significantly lower than fellow and normal control eyes on all scales at grade level and at +1 above grade level. Monocular reading rate, accuracy, fluency, and comprehension were correlated with visual acuity of treated eyes (rs = −0.575 to −0.875, p < 0.005). Treated eyes with 0.1-0.3 logMAR visual acuity did not differ from fellow or normal control eyes in rate, accuracy, fluency, or comprehension when reading at grade level or at +1 above grade level. Fellow eyes did not differ from normal controls on any reading scale. Conclusions Excellent visual acuity outcomes following treatment of dense congenital unilateral cataracts are associated with normal reading ability of the treated eye in school-age children. PMID:20603057

  6. Solar-based navigation for robotic explorers

    Science.gov (United States)

    Shillcutt, Kimberly Jo

    2000-12-01

    This thesis introduces the application of solar position and shadowing information to robotic exploration. Power is a critical resource for robots with remote, long-term missions, so this research focuses on the power generation capabilities of robotic explorers during navigational tasks, in addition to power consumption. Solar power is primarily considered, with the possibility of wind power also contemplated. Information about the environment, including the solar ephemeris, terrain features, time of day, and surface location, is incorporated into a planning structure, allowing robots to accurately predict shadowing and thus potential costs and gains during navigational tasks. By evaluating its potential to generate and expend power, a robot can extend its lifetime and accomplishments. The primary tasks studied are coverage patterns, with a variety of plans developed for this research. The use of sun, terrain and temporal information also enables new capabilities of identifying and following sun-synchronous and sun-seeking paths. Digital elevation maps are combined with an ephemeris algorithm to calculate the altitude and azimuth of the sun from surface locations, and to identify and map shadows. Solar navigation path simulators use this information to perform searches through two-dimensional space, while considering temporal changes. Step by step simulations of coverage patterns also incorporate time in addition to location. Evaluations of solar and wind power generation, power consumption, area coverage, area overlap, and time are generated for sets of coverage patterns, with on-board environmental information linked to the simulations. This research is implemented on the Nomad robot for the Robotic Antarctic Meteorite Search. Simulators have been developed for coverage pattern tests, as well as for sun-synchronous and sun-seeking path searches. Results of field work and simulations are reported and analyzed, with demonstrated improvements in efficiency
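
    A compact sketch of the solar-ephemeris step: computing solar altitude and azimuth for a surface location and time, using standard textbook approximations (declination from day of year, hour angle from local solar time). Accuracy suffices only for illustrating shadow prediction, not mission planning:

```python
import math

def sun_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude/azimuth (degrees) for a location and time.

    Uses the textbook declination approximation and the hour angle from
    local solar time; azimuth is measured clockwise from north.
    """
    lat = math.radians(lat_deg)
    decl = math.radians(-23.44) * math.cos(2 * math.pi * (day_of_year + 10) / 365)
    h = math.radians(15 * (solar_hour - 12))          # hour angle
    sin_alt = (math.sin(lat) * math.sin(decl)
               + math.cos(lat) * math.cos(decl) * math.cos(h))
    alt = math.asin(sin_alt)
    az = math.atan2(-math.sin(h),
                    math.tan(decl) * math.cos(lat) - math.sin(lat) * math.cos(h))
    return math.degrees(alt), math.degrees(az) % 360

# Midsummer noon at a high southern latitude (an Antarctic-style site)
print(sun_position(lat_deg=-80.0, day_of_year=355, solar_hour=12.0))
```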

  7. Formal safety assessment based on relative risks model in ship navigation

    Energy Technology Data Exchange (ETDEWEB)

    Hu Shenping [Merchant Marine College, Shanghai Maritime University, 1550, Pudong Dadao, Shanghai 200135 (China)]. E-mail: sphu@mmc.shmtu.edu.cn; Fang Quangen [Merchant Marine College, Shanghai Maritime University, 1550, Pudong Dadao, Shanghai 200135 (China)]. E-mail: qgfang@mmc.shmtu.edu.cn; Xia Haibo [Merchant Marine College, Shanghai Maritime University, 1550, Pudong Dadao, Shanghai 200135 (China)]. E-mail: hbxia@mmc.shmtu.edu.cn; Xi Yongtao [Merchant Marine College, Shanghai Maritime University, 1550, Pudong Dadao, Shanghai 200135 (China)]. E-mail: xiyt@mmc.shmtu.edu.cn

    2007-03-15

Formal safety assessment (FSA) is a structured and systematic methodology aimed at enhancing maritime safety. It is now gradually gaining broad use in the shipping industry around the world. On the basis of an analysis of the FSA approach, this paper discusses quantitative risk assessment and the generic risk model in FSA, especially frequency and severity criteria in ship navigation. It then puts forward a new model based on relative risk assessment (MRRA). The model presents a risk-assessment approach based on fuzzy functions and takes five factors into account, including detailed information about accident characteristics. It has already been used for the assessment of pilotage safety in Shanghai harbor, China, demonstrating that MRRA is a useful method for solving problems in the risk assessment of ship navigation safety in practice.

  8. Formal safety assessment based on relative risks model in ship navigation

    International Nuclear Information System (INIS)

    Hu Shenping; Fang Quangen; Xia Haibo; Xi Yongtao

    2007-01-01

Formal safety assessment (FSA) is a structured and systematic methodology aimed at enhancing maritime safety. It is now gradually gaining broad use in the shipping industry around the world. On the basis of an analysis of the FSA approach, this paper discusses quantitative risk assessment and the generic risk model in FSA, especially frequency and severity criteria in ship navigation. It then puts forward a new model based on relative risk assessment (MRRA). The model presents a risk-assessment approach based on fuzzy functions and takes five factors into account, including detailed information about accident characteristics. It has already been used for the assessment of pilotage safety in Shanghai harbor, China, demonstrating that MRRA is a useful method for solving problems in the risk assessment of ship navigation safety in practice.

  9. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Science.gov (United States)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  10. Computer vision based nacre thickness measurement of Tahitian pearls

    Science.gov (United States)

    Loesdau, Martin; Chabrier, Sébastien; Gabillon, Alban

    2017-03-01

The Tahitian pearl is the most valuable export product of French Polynesia, contributing over 61 million euros, more than 50% of the total export income. To maintain its excellent reputation on the international market, an obligatory quality control for every pearl destined for export has been established by the local government. One of the controlled quality parameters is the pearl's nacre thickness. The evaluation is currently done manually by experts who visually analyze X-ray images of the pearls. In this article, a computer vision based approach to automate this procedure is presented. Even though computer vision based approaches for pearl nacre thickness measurement exist in the literature, the very specific features of the Tahitian pearl, namely the large shape variety and the occurrence of cavities, have so far not been considered. The presented work closes this gap. Our method consists of segmenting the pearl from X-ray images with a model-based approach, segmenting the pearl's nucleus with a custom heuristic circle detection, and segmenting possible cavities with region growing. From the obtained boundaries, the 2-dimensional nacre thickness profile can be calculated. A certainty measure accounting for imaging and segmentation imprecision is included in the procedure. The proposed algorithms are tested on 298 manually evaluated Tahitian pearls, showing that it is generally possible to automatically evaluate the nacre thickness of Tahitian pearls with computer vision. Furthermore, the results show that the automatic measurement is more precise and faster than the manual one.
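
    An illustrative fragment, not the paper's model-based segmentation: once pearl and nucleus boundaries are approximated as circles, the nacre thickness profile is the radial gap between them. Here HoughCircles stands in for both segmentations (its parameters need tuning and it may return None on real data), and the X-ray file name is hypothetical:

```python
import cv2
import numpy as np

img = cv2.imread("pearl_xray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
blur = cv2.medianBlur(img, 5)

# Stand-in segmentation: detect the pearl (large circle) and nucleus (small)
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=40, minRadius=10, maxRadius=0)
(px, py, pr), (nx, ny, nr) = sorted(circles[0, :2], key=lambda c: -c[2])

# Nacre thickness per angle: distance from nucleus edge to pearl edge along
# each ray from the pearl center (first-order approx for small center offset)
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
cx, cy = nx - px, ny - py
thickness = pr - (nr + cx * np.cos(angles) + cy * np.sin(angles))
print(f"min/mean nacre thickness: {thickness.min():.1f}/{thickness.mean():.1f} px")
```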

  11. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs

    Science.gov (United States)

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-01-01

Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can improve navigation performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme that uses vehicular spatial crowdsourcing in fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and they cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task gets a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme. PMID:28338620

  12. A Secure and Privacy-Preserving Navigation Scheme Using Spatial Crowdsourcing in Fog-Based VANETs.

    Science.gov (United States)

    Wang, Lingling; Liu, Guozhu; Sun, Lijun

    2017-03-24

Fog-based VANETs (vehicular ad hoc networks) are a new paradigm of vehicular ad hoc networks with the advantages of both vehicular cloud and fog computing. Real-time navigation schemes based on fog-based VANETs can improve navigation performance efficiently. In this paper, we propose a secure and privacy-preserving navigation scheme that uses vehicular spatial crowdsourcing in fog-based VANETs. Fog nodes are used to generate and release the crowdsourcing tasks, and they cooperatively find the optimal route according to the real-time traffic information collected by vehicles in their coverage areas. Meanwhile, the vehicle performing the crowdsourcing task gets a reasonable reward. The querying vehicle can retrieve the navigation results from each fog node successively when entering its coverage area, and follow the optimal route to the next fog node until it reaches the desired destination. Our scheme fulfills the security and privacy requirements of authentication, confidentiality and conditional privacy preservation. Some cryptographic primitives, including the Elgamal encryption algorithm, AES, randomized anonymous credentials and group signatures, are adopted to achieve this goal. Finally, we analyze the security and the efficiency of the proposed scheme.

  13. Comparison of three filters in asteroid-based autonomous navigation

    International Nuclear Information System (INIS)

    Cui Wen; Zhu Kai-Jian

    2014-01-01

    At present, optical autonomous navigation has become a key technology in deep space exploration programs. Recent studies focus on the problem of orbit determination using autonomous navigation, and the choice of filter is one of the main issues. To prepare for a possible exploration mission to Mars, the primary emphasis of this paper is to evaluate the capability of three filters, the extended Kalman filter (EKF), unscented Kalman filter (UKF) and weighted least-squares (WLS) algorithm, which have different initial states during the cruise phase. One initial state is assumed to have high accuracy with the support of ground tracking when autonomous navigation is operating; for the other state, errors are set to be large without this support. In addition, the method of selecting asteroids that can be used for navigation from known lists of asteroids to form a sequence is also presented in this study. The simulation results show that WLS and UKF should be the first choice for optical autonomous navigation during the cruise phase to Mars

  14. A Leapfrog Navigation System

    Science.gov (United States)

    Opshaug, Guttorm Ringstad

There are times and places where conventional navigation systems, such as the Global Positioning System (GPS), are unavailable due to anything from temporary signal occultations to lack of navigation system infrastructure altogether. The goal of the Leapfrog Navigation System (LNS) is to provide localized positioning services for such cases. The concept behind leapfrog navigation is to advance a group of navigation units teamwise into an area of interest. In a practical 2-D case, leapfrogging assumes known initial positions of at least two currently stationary navigation units. Two or more mobile units can then start to advance into the area of interest. The positions of the mobiles are constantly being calculated based on cross-range distance measurements to the stationary units, as well as cross-ranges among the mobiles themselves. At some point the mobile units stop, and the stationary units are released to move. This second team of units (now mobile) can then overtake the first team (now stationary) and travel even further towards the common goal of the group. Since there is always one stationary team, the position of any unit can be referenced back to the initial positions. Thus, LNS provides absolute positioning. I developed the navigation algorithms needed to solve leapfrog positions based on cross-range measurements. I used statistical tools to predict how position errors would grow as a function of navigation unit geometry, cross-range measurement accuracy and previous position errors. Using this knowledge I predicted that a 4-unit Leapfrog Navigation System using 100 m baselines and 200 m leap distances could travel almost 15 km before accumulating absolute position errors of 10 m (1σ). Finally, I built a prototype leapfrog navigation system using 4 GPS transceiver ranging units. I placed the 4 units at the vertices of a 10 m × 10 m square, and leapfrogged the group 20 meters forwards, and then back again (40 m total travel). Average horizontal RMS position
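
    A small sketch of the core positioning step: solving a mobile unit's 2-D position from cross-range measurements to stationary units at known positions, via Gauss-Newton least squares. The anchor layout, ranges, and noise level are made up; note that with only two anchors the solution is mirror-ambiguous about the baseline, so the initial guess picks the side:

```python
import numpy as np

def trilaterate(anchors, ranges, x0, iters=10):
    """Gauss-Newton least-squares 2-D position fix from range measurements.

    anchors: (N, 2) known stationary-unit positions; ranges: (N,) cross-ranges;
    x0: initial guess (also resolves the two-anchor mirror ambiguity).
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        diff = x - anchors                       # (N, 2)
        pred = np.linalg.norm(diff, axis=1)      # predicted ranges
        J = diff / pred[:, None]                 # Jacobian of range w.r.t. x
        dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
        x += dx
    return x

anchors = np.array([[0.0, 0.0], [100.0, 0.0]])   # two stationary units
true = np.array([60.0, 80.0])
ranges = (np.linalg.norm(true - anchors, axis=1)
          + np.random.default_rng(4).normal(0, 0.1, 2))   # noisy cross-ranges
print(trilaterate(anchors, ranges, x0=[50.0, 50.0]))      # ~ [60, 80]
```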

  15. Rapid matching of stereo vision based on fringe projection profilometry

    Science.gov (United States)

    Zhang, Ruihua; Xiao, Yi; Cao, Jian; Guo, Hongwei

    2016-09-01

As the core of stereo vision, stereo matching still presents many problems to solve. For smooth surfaces on which feature points are not easy to extract, this paper adds a projector to the stereo vision measurement system and applies fringe projection techniques: since corresponding points extracted from the left and right camera images share the same phase, rapid stereo matching can be realized. The mathematical model of the measurement system is established and the three-dimensional (3D) surface of the measured object is reconstructed. This measurement method can not only broaden the application fields of optical 3D measurement technology and enrich the body of knowledge in the field, but also provides the potential for commercialized measurement systems in practical projects, which has important scientific and economic value.
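
    A toy illustration of phase-based correspondence, assuming rectified cameras and already-unwrapped phase maps that increase monotonically along each scanline (both assumptions): for each left pixel, the matching right pixel is the one with the nearest equal phase on the same row:

```python
import numpy as np

def match_by_phase(phase_left_row, phase_right_row):
    """For each left pixel, find the right pixel with the closest unwrapped
    phase on the same (rectified) scanline; returns disparity per pixel.
    Assumes phase increases monotonically along the scanline."""
    idx = np.searchsorted(phase_right_row, phase_left_row)
    idx = np.clip(idx, 0, len(phase_right_row) - 1)
    return np.arange(len(phase_left_row)) - idx              # disparity

# Fake monotonically increasing unwrapped phase rows with a 12-pixel shift
x = np.arange(640)
phase_l = 0.05 * x
phase_r = 0.05 * (x + 12)                 # right camera sees shifted phase
print(match_by_phase(phase_l, phase_r)[100:105])   # constant disparity of 12
```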

  16. Computer vision techniques for rotorcraft low-altitude flight

    Science.gov (United States)

    Sridhar, Banavar; Cheng, Victor H. L.

    1988-01-01

A description is given of research that applies techniques from computer vision to the automation of rotorcraft navigation. The effort emphasizes the development of a methodology for detecting the ranges to obstacles in the region of interest based on the maximum utilization of passive sensors. The range map derived from the obstacle detection approach can be used as obstacle data for obstacle avoidance in an automatic guidance system and as an advisory display to the pilot. The lack of suitable flight imagery data, however, presents a problem in the verification of concepts for obstacle detection. This problem is being addressed by the development of an adequate flight database and by preprocessing of currently available flight imagery. Some comments are made on future work and how research in this area relates to the guidance of other autonomous vehicles.

  17. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Science.gov (United States)

    Jiang, Yanhua; Xiong, Guangming; Chen, Huiyan; Lee, Dah-Jye

    2014-01-01

This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments. PMID:25256109

  18. Incorporating a Wheeled Vehicle Model in a New Monocular Visual Odometry Algorithm for Dynamic Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Yanhua Jiang

    2014-09-01

Full Text Available This paper presents a monocular visual odometry algorithm that incorporates a wheeled vehicle model for ground vehicles. The main innovation of this algorithm is to use the single-track bicycle model to interpret the relationship between the yaw rate and side slip angle, which are the two most important parameters that describe the motion of a wheeled vehicle. Additionally, the pitch angle is also considered, since the planar-motion hypothesis often fails due to the dynamic characteristics of wheel suspensions and tires in real-world environments. Linearization is used to calculate a closed-form solution of the motion parameters that works as a hypothesis generator in a RAndom SAmple Consensus (RANSAC) scheme, reducing the complexity of solving equations involving trigonometric functions. All inliers found are used to refine the winning solution by minimizing the reprojection error. Finally, the algorithm is applied to real-time on-board visual localization applications. Its performance is evaluated by comparing against state-of-the-art monocular visual odometry methods using both synthetic data and publicly available datasets over several kilometers in dynamic outdoor environments.
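
    A minimal kinematic single-track (bicycle) model of the kind the algorithm builds on, propagating a planar pose from yaw rate and side-slip angle. The parameters are illustrative, and the paper's coupling to visual odometry and RANSAC is not reproduced:

```python
import math

def propagate(x, y, heading, v, yaw_rate, slip_angle, dt):
    """Advance a planar single-track vehicle pose by one time step.

    v: speed [m/s]; yaw_rate: [rad/s]; slip_angle: side slip of the
    velocity vector relative to the heading [rad]; dt: step [s].
    """
    course = heading + slip_angle           # direction of actual travel
    x += v * math.cos(course) * dt
    y += v * math.sin(course) * dt
    heading += yaw_rate * dt
    return x, y, heading

# Drive a gentle left turn for 5 s with a small constant side slip
pose = (0.0, 0.0, 0.0)
for _ in range(50):
    pose = propagate(*pose, v=10.0, yaw_rate=0.1, slip_angle=0.02, dt=0.1)
print(tuple(round(p, 2) for p in pose))
```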

  20. Navigating Evidence-Based Practice Projects: The Faculty Role.

    Science.gov (United States)

    Moch, Susan D; Quinn-Lee, Lisa; Gallegos, Cara; Sortedahl, Charlotte K

An innovative way to facilitate evidence-based practice (EBP) learning and to get evidence into practice is through academic-clinical agency projects involving faculty, undergraduate students, and agency staff. The central role of the faculty is key to successful academic-clinical agency partnerships. Faculty navigate the often difficult process of focusing students and engaging busy staff through initiating, maintaining, and evaluating projects. Students learn valuable EBP skills, staff become engaged in EBP, and the projects are rated highly by agency administrators.

  1. Rehabilitation of patients with motor disabilities using computer vision based techniques

    Directory of Open Access Journals (Sweden)

    Alejandro Reyes-Amaro

    2012-05-01

Full Text Available In this paper we present details about the implementation of computer vision-based applications for the rehabilitation of patients with motor disabilities. The applications are conceived as serious games, where the computer-patient interaction during play contributes to the development of different motor skills. The use of computer vision methods allows the automatic guidance of the patient's movements, making constant specialized supervision unnecessary. The hardware requirements are limited to low-cost devices such as ordinary webcams and netbooks.

  2. Optimal motion planning using navigation measure

    Science.gov (United States)

    Vaidya, Umesh

    2018-05-01

We introduce navigation measure as a new tool to solve the motion planning problem in the presence of static obstacles. Existence of a navigation measure guarantees collision-free convergence to the final destination set beginning with almost every initial condition with respect to the Lebesgue measure. Navigation measure can be viewed as a dual to the navigation function. While the navigation function has its minimum at the final destination set and peaks at the obstacle set, navigation measure takes its maximum value at the destination set and is zero at the obstacle set. A linear programming formalism is proposed for the construction of the navigation measure. Set-oriented numerical methods are utilised to obtain a finite-dimensional approximation of this navigation measure. Application of the proposed navigation measure-based theoretical and computational framework is demonstrated for a motion planning problem in a complex fluid flow.

  3. Dataflow-Based Mapping of Computer Vision Algorithms onto FPGAs

    Directory of Open Access Journals (Sweden)

    Ivan Corretjer

    2007-01-01

    Full Text Available We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF, which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.

  4. Usability Testing of Two Ambulatory EHR Navigators.

    Science.gov (United States)

    Hultman, Gretchen; Marquard, Jenna; Arsoniadis, Elliot; Mink, Pamela; Rizvi, Rubina; Ramer, Tim; Khairat, Saif; Fickau, Keri; Melton, Genevieve B

    2016-01-01

    Despite widespread electronic health record (EHR) adoption, poor EHR system usability continues to be a significant barrier to effective system use for end users. One key to addressing usability problems is to employ user testing and user-centered design. The objective of this study was to understand whether redesigning an EHR-based navigation tool with clinician input improved user performance and satisfaction. A usability evaluation was conducted to compare two versions of a redesigned ambulatory navigator. Participants completed tasks for five patient cases using the navigators while employing a think-aloud protocol. The tasks were based on Meaningful Use (MU) requirements. The navigator version did not affect perceived workload, and time to complete tasks was longer with the redesigned navigator. A relatively small portion of navigator content was used to complete the MU-related tasks, though navigation patterns were highly variable across participants for both navigators. Preferences for EHR navigation structures appeared to be individualized. This study demonstrates the importance of EHR usability assessments for evaluating group and individual performance with different interfaces and the preferences for each design.

  5. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether 3D vision and haptic feedback in combination in a virtual reality environment lead to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group trained with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded while tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002), and the number of attempts needed to reach proficiency was significantly lower. The study group showed significantly faster learning of skills in three of the four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted between the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect subsequent 2D video box performance.

  6. DEVELOPMENT OF A PEDESTRIAN INDOOR NAVIGATION SYSTEM BASED ON MULTI-SENSOR FUSION AND FUZZY LOGIC ESTIMATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Y. C. Lai

    2015-05-01

    Full Text Available This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical system (MEMS) technology. There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm, which estimates the step lengths of the user. Since both the walking length and the direction are required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted via Bluetooth to a smartphone, which performs the dead reckoning navigation in a self-developed app. To counter the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system.

  7. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical system (MEMS) technology. There are two types of IMU module, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm, which estimates the step lengths of the user. Since both the walking length and the direction are required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted via Bluetooth to a smartphone, which performs the dead reckoning navigation in a self-developed app. To counter the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied in the app of the proposed navigation system to extend its...
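
    The dead reckoning core described in both versions of this record reduces to a short position update once step length and angular rate are available. The sketch below assumes the fuzzy estimator has already produced (step length, gyro rate, interval) triples; all values are invented.

        # Minimal dead-reckoning sketch of the position update described
        # above. Step detection and the fuzzy step-length estimator are
        # assumed to have already run; the input triples are made up.
        import math

        def dead_reckon(steps, x=0.0, y=0.0, heading=0.0):
            track = [(x, y)]
            for step_length, gyro_rate, dt in steps:
                heading += gyro_rate * dt        # integrate angular rate (rad)
                x += step_length * math.cos(heading)
                y += step_length * math.sin(heading)
                track.append((x, y))
            return track

        # three straight steps, then a 90-degree left turn over one step
        steps = [(0.7, 0.0, 0.5)] * 3 + [(0.7, math.pi / 2, 1.0)]
        print(dead_reckon(steps))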

  8. Vision-based Engagement Detection in Virtual Reality

    OpenAIRE

    Tofighi, Ghassem; Raahemifar, Kaamraan; Frank, Maria; Gu, Haisong

    2016-01-01

    User engagement modeling for manipulating actions in vision-based interfaces is one of the most important case studies of user mental state detection. In a virtual reality environment that employs camera sensors to recognize human activities, we have to know when the user intends to perform an action and when they do not. Without a proper algorithm for recognizing engagement status, any kind of activity could be interpreted as a manipulating action, known as the "Midas Touch" problem. Baseline approach for so...

  9. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    Directory of Open Access Journals (Sweden)

    Dashan Zhang

    2016-04-01

    Full Text Available The development of image sensors and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measurement and does not add any mass to the measured object, in contrast to traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structural vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel-level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated in a laboratory experiment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structural vibration signals by tracking either artificial targets or natural features.
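
    The subpixel refinement idea can be sketched compactly: correlate two intensity profiles in the frequency domain, then fit a parabola through the correlation peak and its neighbours. This is an illustrative stand-in in the spirit of the Taylor-approximation refinement, not the authors' modified algorithms.

        # Subpixel displacement between two 1-D intensity profiles via FFT
        # cross-correlation plus parabolic peak interpolation. Illustrative
        # only; the paper's algorithms differ in detail.
        import numpy as np

        def subpixel_shift(a, b):
            # circular cross-correlation through the frequency domain
            corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            k = int(np.argmax(corr))
            # parabola through the peak and its two neighbours
            ym, y0, yp = corr[k - 1], corr[k], corr[(k + 1) % len(a)]
            delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
            shift = k + delta
            return shift - len(a) if shift > len(a) / 2 else shift

        x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
        ref = np.exp(-((x - np.pi) ** 2))            # synthetic profile
        moved = np.exp(-((x - np.pi - 0.03) ** 2))   # shifted by ~1.22 samples
        print(subpixel_shift(moved, ref))            # ~1.22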

  10. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.
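
    The evaluation step maps naturally onto data-parallel hardware because each pose hypothesis is scored independently. The CPU sketch below stands in for that GPU step with a vectorised numpy pass over a particle set; the silhouette renderer is a hypothetical stub where a real system would rasterise the 3-D model.

        # CPU stand-in for the GPU pose-evaluation step: score a few hundred
        # pose hypotheses against the current frame in one vectorized pass.
        # The renderer is a hypothetical stub, not a real 3-D rasterizer.
        import numpy as np

        rng = np.random.default_rng(0)
        H = W = 64
        frame = np.zeros((H, W)); frame[20:40, 25:35] = 1.0  # observed silhouette

        def render(pose):
            # hypothetical stand-in: pose = (top, left) of a 20x10 box
            img = np.zeros((H, W))
            t, l = int(pose[0]), int(pose[1])
            img[t:t + 20, l:l + 10] = 1.0
            return img

        poses = rng.uniform(5, 40, size=(300, 2))            # particle set
        renders = np.stack([render(p) for p in poses])       # (300, H, W)
        scores = -np.abs(renders - frame).sum(axis=(1, 2))   # overlap score
        print(poses[np.argmax(scores)])                      # near (20, 25)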

  11. Flight Test Result for the Ground-Based Radio Navigation System Sensor with an Unmanned Air Vehicle.

    Science.gov (United States)

    Jang, Jaegyu; Ahn, Woo-Guen; Seo, Seungwoo; Lee, Jang Yong; Park, Jun-Pyo

    2015-11-11

    The Ground-based Radio Navigation System (GRNS) is an alternative/backup navigation system based on time synchronized pseudolites. It has been studied for some years due to the potential vulnerability issue of satellite navigation systems (e.g., GPS or Galileo). In the framework of our study, a periodic pulsed sequence was used instead of the randomized pulse sequence recommended as the RTCM (radio technical commission for maritime services) SC (special committee)-104 pseudolite signal, as a randomized pulse sequence with a long dwell time is not suitable for applications requiring high dynamics. This paper introduces a mathematical model of the post-correlation output in a navigation sensor, showing that the aliasing caused by the additional frequency term of a periodic pulsed signal leads to a false lock (i.e., Doppler frequency bias) during the signal acquisition process or in the carrier tracking loop of the navigation sensor. We suggest algorithms to resolve the frequency false lock issue in this paper, relying on the use of a multi-correlator. A flight test with an unmanned helicopter was conducted to verify the implemented navigation sensor. The results of this analysis show that there were no false locks during the flight test and that outliers stem from bad dilution of precision (DOP) or fluctuations in the received signal quality.
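
    The false-lock mechanism can be reproduced numerically: a carrier gated by a periodic pulse train shows correlation peaks not only at the true Doppler but also at offsets of one pulse repetition frequency (PRF), which is exactly the ambiguity a multi-correlator must resolve. The parameter values below are arbitrary, not those of the flight test.

        # Numerical illustration of the Doppler false-lock effect described
        # above. The gated carrier's spectrum has lines at the true Doppler
        # plus multiples of the PRF, so acquisition can lock to a wrong bin.
        import numpy as np

        fs, prf, duty, f_dopp = 100_000.0, 1_000.0, 0.1, 250.0
        t = np.arange(0, 0.1, 1 / fs)
        gate = ((t * prf) % 1.0) < duty               # periodic pulse gating
        rx = gate * np.exp(2j * np.pi * f_dopp * t)   # received pulsed carrier

        for f in (250.0, 1250.0, -750.0, 600.0):      # candidate Doppler bins
            power = np.abs(np.sum(rx * np.exp(-2j * np.pi * f * t))) ** 2
            print(f"{f:8.1f} Hz -> {power:.3e}")
        # peaks at 250 and 250 +/- PRF are comparable; 600 Hz is far weaker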

  12. Personal and organisational vision supporting leadership in a team-based transport environment

    Directory of Open Access Journals (Sweden)

    Theuns F.J. Oosthuizen

    2012-11-01

    Full Text Available Leadership in an operational environment requires operational employees to take on responsibility as leaders. This leadership role can vary from self-leadership to team leadership, with personal and organisational vision as key drivers of operational leadership performance. The research population comprised operational employees working in a transport environment who attended a leadership development seminar. A census was conducted using a questionnaire-based empirical research approach, and the data were analysed using SPSS. Responses indicate that the leadership development programme raised awareness of the importance of values and vision in establishing effective leadership practices. The research confirmed the importance of vision as a key driver of operational leadership in this context. Further skill development is required on how to align personal values and vision with those of the organisation (department) within which operational employees function.

  13. An Analysis of CONUS Based Deployment of Pseudolites for Positioning, Navigation and Timing (PNT) Systems

    Science.gov (United States)

    2015-09-17

    incorporate a master "time-keeper" or reference transmitter with which other receiver/transmitters can synchronize. Currently, GPS provides about 1 m... Thesis presented to the Faculty, Department of Systems Engineering and Management, Graduate School of...

  14. The attack navigator

    DEFF Research Database (Denmark)

    Probst, Christian W.; Willemson, Jan; Pieters, Wolter

    2016-01-01

    The need to assess security and take protection decisions is at least as old as our civilisation. However, the complexity and development speed of our interconnected technical systems have surpassed our capacity to imagine and evaluate risk scenarios. This holds in particular for risks that are caused by the strategic behaviour of adversaries. Therefore, technology-supported methods are needed to help us identify and manage these risks. In this paper, we describe the attack navigator: a graph-based approach to security risk assessment inspired by navigation systems. Based on maps of a socio...
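
    Although the abstract is conceptual, the navigator metaphor suggests an obvious toy: treat the socio-technical map as a weighted graph whose edge weights are attacker costs, and search for the cheapest route to a target asset. The graph and the costs below are invented for illustration and are not the paper's formalism.

        # Toy "attack navigation": nodes are assets or access states, edge
        # weights are assumed attacker costs, and the cheapest route to a
        # target is found with Dijkstra's algorithm. Purely illustrative.
        import heapq

        graph = {
            "outside": [("phishing_mail", 2), ("lobby", 5)],
            "phishing_mail": [("employee_pc", 3)],
            "lobby": [("server_room_door", 8)],
            "employee_pc": [("file_server", 4)],
            "server_room_door": [("file_server", 1)],
        }

        def cheapest_attack(start, target):
            pq, seen = [(0, start, [start])], set()
            while pq:
                cost, node, path = heapq.heappop(pq)
                if node == target:
                    return cost, path
                if node in seen:
                    continue
                seen.add(node)
                for nxt, w in graph.get(node, []):
                    heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
            return None

        print(cheapest_attack("outside", "file_server"))
        # -> (9, ['outside', 'phishing_mail', 'employee_pc', 'file_server'])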

  15. Micro Vision

    OpenAIRE

    Ohba, Kohtaro; Ohara, Kenichi

    2007-01-01

    In the field of micro vision there has been little research compared with the macro environment. However, by applying results from macro-scale computer vision techniques, the micro environment can be measured and observed. Moreover, based on the effects of the micro environment, it is possible to discover new theories and new techniques.

  16. Industrial vision

    DEFF Research Database (Denmark)

    Knudsen, Ole

    1998-01-01

    This dissertation is concerned with the introduction of vision-based applications in the ship building industry. The industrial research project is divided into a natural sequence of developments, from basic theoretical projective image generation via CAD and subpixel analysis to a description... is presented, and the variability of the parameters is examined and described. The concept of using CAD together with vision information is based on the fact that all items processed at OSS have an associated complete 3D CAD model that is accessible at all production states. This concept gives numerous possibilities for using vision in applications which otherwise would be very difficult to automate. The requirement for low tolerances in production is, despite the huge dimensions of the items involved, extreme. This fact makes great demands on the ability to do robust subpixel estimation. A new method based...

  17. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with the TCKF. The first-stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. To obtain the course measurements, the filter uses the magnetic sensors and a position-trace based course angle. To prevent magnetic disturbances from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second-stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly from course-angle-error measurements, the estimation accuracy for the heading and the yaw gyro bias can be enhanced compared with the ZUPT-only case, which in turn improves the position accuracy. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with that for the ZUPT-only case and for other cases that use ZUPT with various types of magnetic heading measurements. The results show that the position errors are reduced by up to 90% compared with conventional ZUPT-based PDR algorithms.
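
    A grossly simplified scalar version of the cascade conveys the structure: a first filter smooths course-angle-error measurements, and its output is fed to a second filter as a direct measurement of the IMU heading error. All noise parameters below are invented; the paper's filters carry full navigation states.

        # Scalar two-stage Kalman-filter sketch of the cascade idea above.
        # Stage 1 filters noisy course-error measurements; stage 2 treats the
        # stage-1 output as a measurement of the heading error. Toy values.
        import numpy as np

        def scalar_kf(z_seq, q, r, x=0.0, p=1.0):
            est = []
            for z in z_seq:
                p += q                  # predict (random-walk error model)
                k = p / (p + r)         # Kalman gain
                x += k * (z - x)        # measurement update
                p *= (1 - k)
                est.append(x)
            return np.array(est)

        rng = np.random.default_rng(1)
        true_heading_err = 0.05                                   # rad
        course_meas = true_heading_err + rng.normal(0, 0.02, 200) # stage-1 input
        course_err_est = scalar_kf(course_meas, q=1e-6, r=4e-4)   # stage 1
        heading_err_est = scalar_kf(course_err_est, q=1e-6, r=1e-4)  # stage 2
        print(heading_err_est[-1])      # ~0.05 rad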

  18. Design of all-weather celestial navigation system

    Science.gov (United States)

    Sun, Hongchi; Mu, Rongjun; Du, Huajun; Wu, Peng

    2018-03-01

    In order to realize autonomous navigation in the atmosphere, an all-weather celestial navigation system is designed. The research on this system includes a discrimination method based on comentropy (information entropy) and an adaptive navigation algorithm based on the P value. The comentropy discrimination method is studied to realize autonomous switching between the two celestial navigation modes, starlight and radio. Finally, an adaptive filtering algorithm based on the P value is proposed, which greatly improves the disturbance rejection capability of the system. The experimental results show that the three-axis attitude accuracy is better than 10″ and that the system can work in all weather. In a perturbation environment, the position accuracy of the integrated navigation system can be improved by 20% compared with the traditional method. The system basically meets the requirements of all-weather celestial navigation, offering stability, reliability, high accuracy and strong anti-interference capability.
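
    A toy version of the entropy-based mode discrimination might look as follows: compute the Shannon entropy of the star-sensor image histogram and fall back from starlight to radio when it rises, for example when cloud washes out point stars into diffuse scatter. The images and the threshold are invented; the paper's discrimination method is not specified at this level of detail.

        # Toy entropy-based mode switch between starlight and radio modes.
        # High histogram entropy (diffuse scatter) triggers the fallback.
        import numpy as np

        def image_entropy(img, bins=64):
            hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(2)
        clear = np.zeros((64, 64)); clear[30:32, 30:32] = 1.0  # point stars
        cloudy = rng.uniform(0.3, 0.7, (64, 64))               # diffuse scatter

        THRESH = 3.0                                           # assumed threshold
        for name, img in (("clear", clear), ("cloudy", cloudy)):
            mode = "starlight" if image_entropy(img) < THRESH else "radio"
            print(name, round(image_entropy(img), 2), "->", mode)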

  19. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots.

    Science.gov (United States)

    Sherwin, Tyrone; Easte, Mikala; Chen, Andrew Tzer-Yeu; Wang, Kevin I-Kai; Dai, Wenbin

    2018-02-14

    Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications, such as factory automation, indoor delivery, and even search and rescue scenarios, require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach to dynamic target localisation using a single RF emitter, which is used as the basis for allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.

  20. A Single RF Emitter-Based Indoor Navigation Method for Autonomous Service Robots

    Directory of Open Access Journals (Sweden)

    Tyrone Sherwin

    2018-02-01

    Full Text Available Location-aware services are one of the key elements of modern intelligent applications. Numerous real-world applications, such as factory automation, indoor delivery, and even search and rescue scenarios, require autonomous robots to have the ability to navigate in an unknown environment and reach mobile targets with minimal or no prior infrastructure deployment. This research investigates and proposes a novel approach to dynamic target localisation using a single RF emitter, which is used as the basis for allowing autonomous robots to navigate towards and reach a target. Through the use of multiple directional antennae, Received Signal Strength (RSS) is compared to determine the most probable direction of the targeted emitter, which is combined with distance estimates to improve the localisation performance. The accuracy of the position estimate is further improved using a particle filter to mitigate the fluctuating nature of real-time RSS data. Based on the direction information, a motion control algorithm is proposed, using Simultaneous Localisation and Mapping (SLAM) and A* path planning to enable navigation through unknown complex environments. A number of navigation scenarios were developed in the context of factory automation applications to demonstrate and evaluate the functionality and performance of the proposed system.
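
    The core RSS-comparison step in both versions of this record can be sketched with four directional antennae and a power-weighted circular mean of the two strongest boresights. The antenna gain pattern and noise level below are assumptions; in the full system a particle filter smooths these raw bearings.

        # Toy bearing estimate from directional-antenna RSS comparison.
        # Cardioid-like gains and noise levels are invented for illustration.
        import numpy as np

        ANT_HEADINGS = np.deg2rad([0, 90, 180, 270])   # antenna boresights

        def rss(true_bearing, rng):
            gain = np.maximum(np.cos(ANT_HEADINGS - true_bearing), 0.05)
            return gain + rng.normal(0, 0.02, 4)       # noisy signal strength

        def bearing_estimate(powers):
            i, j = np.argsort(powers)[-2:]             # two strongest antennae
            w = powers[[i, j]] / powers[[i, j]].sum()
            # circular weighted mean of the two boresight directions
            v = w[0] * np.exp(1j * ANT_HEADINGS[i]) + w[1] * np.exp(1j * ANT_HEADINGS[j])
            return np.angle(v)

        rng = np.random.default_rng(3)
        est = bearing_estimate(rss(np.deg2rad(30.0), rng))
        print(np.rad2deg(est))   # roughly 30 degrees before any filtering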